Visualising AI Security Framework
Organizations adopting advanced machine learning (ML) and generative AI (GenAI) can unlock tremendous value but face a wide range of potential risks. To help address these systematically, Databricks created the AI Security Framework (DASF), which:
- Identifies 12 AI/ML system components (from Raw Data to Platform Security)
- Details 55 distinct technical risks that appear at various stages
- Maps each risk to actionable controls or solutions you can implement on the Databricks Data Intelligence Platform — including Unity Catalog, Mosaic AI (Model Serving, Vector Search), and more
Below, we first re-display the risk mindmaps, then show solution mindmaps that group Databricks features controlling or mitigating each cluster of risks.
Data Operations: Risks Mindmap
Data Operations: Solutions Mindmap
Model Operations: Risks Mindmap
Model Operations: Solutions Mindmap
Model Deployment and Serving: Risks Mindmap
Model Deployment and Serving: Solutions Mindmap
MLOps and Platform: Risks Mindmap
MLOps and Platform: Solutions Mindmap
Putting It All Together
- Identify which DASF risks apply to your ML projects
- Pick the relevant solutions from the mindmaps (e.g., LLM guardrails if you see a lot of prompt injection risk)
- Implement and refine as new threats and new Databricks features emerge
Databricks offers a unified data and AI platform, ensuring you don’t have to stitch together a dozen fragmented tools to secure your entire ML lifecycle. Using Unity Catalog for governance and Mosaic AI for training, serving, and monitoring, you can systematically mitigate the 55 DASF risks in a single place.
If you like that article, consider buying me coffee :-) https://ko-fi.com/hubertdudek
All diagrams and content are informed by the Databricks AI Security Framework (DASF): https://www.databricks.com/resources/whitepaper/databricks-ai-security-framework-dasf
Charts are generated by using https://www.mermaidchart.com/