Discover how adversaries use AI distillation attacks and "hydra clusters" to steal frontier AI capabilities, and how cybercriminals weaponize LLMs for global operations.
Chain-of-Thought (CoT) Forgery is a sophisticated attack in which hackers plant fake reasoning to trick AI models into bypassing safety guardrails. Learn how "Authority by Format" works and how to secure your LLMs against it.
Microsoft's new AI Red Team tool automates the discovery of risks in LLMs. Learn how this agentic system surfaces vulnerabilities such as jailbreaking and prompt injection before attackers can exploit them.