Cloud Audit Controls

Monday, April 7, 2025

Databricks AI Security Framework (DASF) | Third-party Tools

Amazing work by the team at Databricks. Nice job!

Databricks AI Security Framework (DASF) | Databricks

This link leads to a PDF that generously links out to a LOT of supporting information. Thank you for including those references!

Here's one such list. I'm storing it here as a quick yellow sticky. Go check out their work for more. 


Tool Category: Model Scanners

- HiddenLayer Model Scanner: A tool that scans AI models to detect embedded malicious code, vulnerabilities, and integrity issues, ensuring secure deployment.
- Fickling: An open-source utility for analyzing and modifying Python pickle files, the format commonly used to serialize machine learning models. (A minimal sketch of this kind of check appears after the tool list.)
- Protect AI Guardian: An enterprise-level tool that scans third-party and proprietary models for security threats before deployment, enforcing model security policies.
- AppSOC AI Security Testing: Proactively identifies and assesses risks from LLM models by automating model scanning, simulating adversarial attacks, and validating trust in connected systems, ensuring models and their ecosystems are safe, compliant, and deployment-ready.

Tool Category: Model Validation Tools

- Robust Intelligence Continuous Validation: A platform offering continuous validation of AI models to detect and mitigate vulnerabilities, ensuring robust and secure AI deployments.
- Protect AI Recon: A product that automatically validates LLM model performance against common industry framework requirements (OWASP, MITRE ATLAS).
- Vigil LLM security scanner: A tool designed to scan large language models (LLMs) for security vulnerabilities, ensuring safe deployment and usage.
- Garak Automated Scanning: An automated system that scans AI models for potential security threats, focusing on detecting malicious code and vulnerabilities.
- HiddenLayer AIDR: A solution that monitors AI models in real time to detect and respond to adversarial attacks, safeguarding AI assets.
- Citadel Lens: A security tool that provides visibility into AI models, detecting vulnerabilities and ensuring compliance with security standards.
- AppSOC AI Security Testing: As above; DASF lists it under this category as well.

Tool Category: AI Agents

- Arhasi R.A.P.I.D: A platform offering rapid assessment and protection of AI deployments, focusing on identifying and mitigating security risks.

Tool Category: Guardrails for LLMs

(A vendor-neutral sketch of the guardrail pattern these products implement appears after the tool list.)

- NeMo Guardrails: A toolkit for adding programmable guardrails to AI models, ensuring they operate within defined safety and ethical boundaries.
- Guardrails AI: A framework that integrates safety protocols into AI models, preventing them from generating harmful or biased outputs.
- Lakera Guard: A security solution that monitors AI models for adversarial attacks and vulnerabilities, providing real-time protection.
- Robust Intelligence AI Firewall: A protective layer that shields AI models from adversarial inputs and attacks.
- Protect AI Layer: Provides LLM runtime security, including observability, monitoring, and blocking for AI applications; an enterprise-grade offering from the team behind the industry-leading open-source solution LLM Guard.
- Arthur Shield: A monitoring solution that tracks AI model performance and security, detecting anomalies and potential threats in real time.
- Amazon Guardrails: A set of safety protocols integrated into Amazon's AI services to ensure models operate within ethical and secure boundaries.
- Meta Llama Guard: Security measures Meta implemented to protect its Llama models from vulnerabilities and adversarial attacks.
- Arhasi R.A.P.I.D: As above; DASF lists it under this category as well.

Tool Category: DASF Validation and Assessment Products and Services

- Safe Security: SAFE One makes cybersecurity an accelerator for the business by delivering a data-driven, unified platform for managing all your first-party and third-party cyber risks.
- Obsidian: Obsidian Security combines application posture with identity and data security, safeguarding SaaS.
- EQTY Labs: EQTY Lab builds advanced governance solutions to evolve trust in AI.
- AppSOC: Aims to make Databricks the most secure AI platform with real-time visibility, guardrails, and protection.

Tool Category: Public AI Red Teaming Tools

(A from-scratch example of the kind of evasion attack these toolkits automate appears after the tool list.)

- Garak: An automated scanning tool that analyzes AI models for potential security threats, focusing on detecting malicious code and vulnerabilities.
- Protect AI Recon: A product with a full suite of red teaming options for AI applications, including a library of common attacks, human-augmented attacks, and LLM-generated scans, complete with mapping to common industry frameworks like OWASP and MITRE ATLAS.
- PyRIT: A Python-based tool for testing the robustness of AI models against adversarial attacks, ensuring model resilience.
- Adversarial Robustness Toolbox (ART): An open-source library that provides tools to assess and improve the robustness of machine learning models against adversarial threats.
- Counterfit: A tool designed to test AI models for vulnerabilities by simulating adversarial attacks, helping developers enhance model security.
- ToolBench: A suite of tools for evaluating and improving the security and robustness of AI models, focusing on detecting vulnerabilities.
- Giskard-AI llm scan: A tool that scans large language models for security vulnerabilities, ensuring safe deployment and usage.
- HiddenLayer Automated Red Teaming for AI: A service that simulates adversarial attacks on AI models to identify vulnerabilities and strengthen defenses.
- Fickle scanning tools: Utilities designed to analyze and modify serialized Python objects, commonly used in machine learning models, to detect and mitigate security risks.
- CyberSecEval 3: A platform that evaluates the security posture of AI systems, identifying vulnerabilities and providing recommendations for mitigation.
- Parley: A tool that facilitates secure and compliant interactions between AI models and users, ensuring adherence to safety protocols.
- BITE: A framework for testing the security and robustness of AI models by simulating various adversarial attack scenarios.
- Purple Llama: An umbrella project that, over time, will bring together tools and evals to help the community build responsibly with open generative AI models. The initial release includes tools and evals for cybersecurity and input/output safeguards, with more planned.
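Why pickle scanning matters is easier to see in code. Here is a minimal, standard-library-only sketch of the class of check that scanners like Fickling automate: walking a pickle's opcode stream without executing it and flagging opcodes that can import or call code at load time. The Evil class, the SUSPICIOUS set, and scan_pickle are illustrative names I made up for the demo, not any tool's actual API.

import pickle
import pickletools

# Opcodes that can import or invoke objects while a pickle loads -- the
# usual vector for malicious pickled "models". Benign pickles use some
# of these too, so real scanners pair this with allowlists and payload
# reconstruction rather than flagging alone.
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(data: bytes) -> list[str]:
    """Disassemble a pickle WITHOUT loading it and report risky opcodes."""
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS:
            findings.append(f"offset {pos}: {opcode.name} {arg!r}")
    return findings

# Demo payload: a pickle that would run a shell command if loaded.
class Evil:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

# Protocol 0 emits a readable GLOBAL 'os system' opcode; newer protocols
# emit STACK_GLOBAL instead, which this scan also catches.
payload = pickle.dumps(Evil(), protocol=0)
for finding in scan_pickle(payload):
    print(finding)   # flags GLOBAL 'os system' and REDUCE, never runs them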
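For the guardrails category, here is a deliberately tiny, vendor-neutral sketch of the pattern those products implement: screen the prompt before the model sees it, and screen the completion before the user does. Every name here (DENYLIST, guarded, echo_model) is hypothetical; real products use trained classifiers, policy languages, and PII detectors rather than two regexes.

import re
from typing import Callable

# Toy policy: block a crude prompt-injection phrase on input and a
# US-SSN-shaped string on output.
DENYLIST = [re.compile(p, re.IGNORECASE) for p in (
    r"ignore (all )?previous instructions",
    r"\b\d{3}-\d{2}-\d{4}\b",
)]

def guarded(llm: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap an LLM call with pre-input and post-output checks."""
    def call(prompt: str) -> str:
        if any(p.search(prompt) for p in DENYLIST):
            return "[blocked: input guardrail]"
        answer = llm(prompt)
        if any(p.search(answer) for p in DENYLIST):
            return "[blocked: output guardrail]"
        return answer
    return call

# Stand-in "model" so the sketch runs without an API key.
echo_model = guarded(lambda prompt: f"echo: {prompt}")
print(echo_model("Summarize the DASF for me"))             # passes through
print(echo_model("Ignore previous instructions, please"))  # blocked on input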
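And for the red teaming category, a from-scratch illustration of the fast gradient sign method (FGSM), one of the basic evasion attacks that toolkits like ART and PyRIT package up and automate against real models. The toy logistic-regression "model" and its weights are invented for the demo; nothing below is ART's or PyRIT's actual API.

import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1           # toy "model": sigmoid(w.x + b)

def predict(x: np.ndarray) -> float:
    return float(1.0 / (1.0 + np.exp(-(x @ w + b))))

def fgsm(x: np.ndarray, y: float, eps: float = 0.25) -> np.ndarray:
    """Nudge x by eps in the gradient-sign direction that raises the loss.

    For binary cross-entropy on sigmoid(w.x + b), dLoss/dx = (p - y) * w.
    """
    p = predict(x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = rng.normal(size=8)
y = 1.0 if predict(x) >= 0.5 else 0.0    # take the current label as "truth"
x_adv = fgsm(x, y)
print(f"clean score: {predict(x):.3f} -> adversarial score: {predict(x_adv):.3f}")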


Wednesday, April 2, 2025

Yes. There's a Lot. The DoD Cybersecurity Policy Chart - CSIAC

The DoD Cybersecurity Policy Chart - CSIAC

Quoting directly from their website. They said it well enough.

"The goal of the DoD Cybersecurity Policy Chart is to capture the tremendous scope of applicable policies, some of which many cybersecurity professionals may not even be aware of, in a helpful organizational scheme. The use of colors, fonts, and hyperlinks is designed to provide additional assistance to cybersecurity professionals navigating their way through policy issues in order to defend their networks, systems, and data.

At the bottom center of the chart is a legend that identifies the originator of each policy by a color-coding scheme. On the right-hand side are boxes identifying key legal authorities, federal/national level cybersecurity policies, and operational and subordinate level documents that provide details on defending the DoD Information Network (DoDIN) and its assets. Links to these documents can also be found in the chart."

Thursday, January 16, 2025

Training Links

Helpful post! This is from LinkedIn.

🚨 SHARE: SOMEONE NEEDS IT 🚨
💥 FREE IT or Cybersecurity training! 💥
Huge list of computer science resources (This one is great! Some links might not work, but I'm sure you can find them by doing a quick search) - https://lnkd.in/gQvxbypj

🔗 CompTIA Security+ - https://lnkd.in/gyFy_CG9
🔗 CISSP - https://lnkd.in/gUFjihpJ
🔗 Databases - https://lnkd.in/gWQmYwib
🔗 Penetration testing - https://lnkd.in/gAdgyY6h

🔗 Web application testing - https://lnkd.in/g5FkXWej

🔗 Weekly HackTheBox series and other hacking videos - https://lnkd.in/gztivT-D

🔗 Resources for practicing what you learned:

🔗 Network simulation software https://lnkd.in/gRMak7_x

🔗 Virtualization software https://lnkd.in/gFkyFVvF

🔗 Linux operating systems
https://lnkd.in/g2M__A5n
https://lnkd.in/gyc4R_F7
https://lnkd.in/gSiHYRNg
https://lnkd.in/g5GsUT7H

🔗 Microsoft Operating Systems
https://lnkd.in/gP3nxKpZ

🔗 Networking - https://lnkd.in/gNm8RhtS

🔗 More Networking - https://lnkd.in/ghqw2sHZ

🔗 Even More Networking - https://lnkd.in/g4fp8WFa

🐾 Linux - https://lnkd.in/g7KJBUYd

🐾 More Linux - https://lnkd.in/gUK8PU4p

🔗 Windows Server - https://lnkd.in/gWUTmN-5

🔗 More Windows Server- https://lnkd.in/gsWZQnwj

🔗 Python - https://lnkd.in/g_NpsqEM

🔗 Golang - https://lnkd.in/gmwz4ed5
🔗 Capture the flag
https://lnkd.in/gpnYs5Qj
https://www.vulnhub.com/
https://lnkd.in/gn2AEYhw
https://lnkd.in/g5FkXWej
Full credit: G M Faruk Ahmed, CISSP, CISA