The New Triad of AI Security: Promptfoo, Strix, and CAI
Discover the new wave of open-source AI security tools: Promptfoo, Strix, and CAI. Learn how to combine them for a defense-in-depth strategy to secure your AI applications.