Home

Published

- 7 min read

EchoLeak: Zero-Click AI Vulnerability Exposed M365 Copilot Data

img of EchoLeak: Zero-Click AI Vulnerability Exposed M365 Copilot Data

In the rapidly evolving landscape of Artificial Intelligence, new capabilities bring new security challenges. A recently disclosed vulnerability, dubbed EchoLeak, has highlighted significant risks within Microsoft 365 (M365) Copilot, one of the most prominent AI assistants integrated into enterprise workflows. Characterized as a “zero-click” AI vulnerability, EchoLeak allowed attackers to potentially exfiltrate sensitive data from a user’s M365 Copilot context without any interaction from the victim.

This critical issue, now identified as CVE-2025-32711 with a CVSS score of 9.3, has been addressed by Microsoft and was included in its June 2025 Patch Tuesday updates. While there’s no evidence of malicious exploitation in the wild, the discovery by Aim Security underscores the inherent complexities and novel attack vectors associated with Large Language Model (LLM) powered agents.

CVE-2025-32711

Understanding the EchoLeak Vulnerability

EchoLeak is fundamentally an instance of what Aim Security terms an “LLM Scope Violation.” This occurs when an attacker’s instructions, embedded within untrusted content (like an external email), trick the AI system into accessing and processing privileged internal data without explicit user consent or interaction.

In the case of M365 Copilot, which integrates deeply with a user’s Microsoft Graph data (Outlook emails, SharePoint files, Teams chats, etc.), this presented a serious risk. The core of the attack exploited Copilot’s Retrieval-Augmented Generation (RAG) engine, which is designed to pull relevant information from these data sources to answer user queries.

The Attack Unfolded as Follows (Simplified):

  1. Injection: An attacker would send a seemingly innocuous email to an employee’s Outlook inbox. This email contained a hidden, specially crafted malicious prompt payload, formatted in markdown. Critically, this payload was designed to bypass Microsoft’s Cross-Prompt Injection Attack (XPIA) classifiers – guardrails intended to prevent such malicious instructions from reaching the LLM.
  2. User Interaction (Benign): The victim would then use M365 Copilot for a legitimate, unrelated business task (e.g., “summarize my earnings report”).
  3. Scope Violation: When Copilot processed the user’s query, its RAG engine would also parse the attacker’s malicious email in the user’s inbox. The crafted payload in the email would then trick Copilot into mixing this untrusted input (the attacker’s instructions) with the sensitive data it had access to within the user’s M365 context.
  4. Retrieval/Exfiltration: The malicious prompt would then instruct Copilot to leak this sensitive data. Aim Security’s research detailed methods to bypass Copilot’s link and image redaction mechanisms, allowing the data to be exfiltrated to an attacker-controlled server, for instance, through markdown image links that, when rendered by the browser, would automatically fetch the “image” (containing the sensitive data in the URL parameters) from the attacker’s server. This was achieved by exploiting how Copilot handled certain markdown formats and by finding domains in the Content-Security-Policy (CSP) allowlist (like specific SharePoint or Teams URLs) that could be leveraged for data exfiltration.

The “zero-click” nature is paramount: the victim didn’t need to click any malicious links in the email or interact with it in any specific way beyond their normal use of Copilot. The attacker relied on Copilot’s default behavior of processing content from Outlook and SharePoint.

Why EchoLeak Was Particularly Dangerous

EchoLeak’s significance lay in several factors:

  • Zero-Click Execution: No specific victim action was required beyond normal Copilot usage.
  • Exploitation of Internal Privileges: The attack leveraged Copilot’s existing access to the user’s internal M365 data.
  • Bypassing Guardrails: The technique demonstrated methods to circumvent Microsoft’s XPIA classifiers and content redaction mechanisms.
  • Broad Data Exposure: Any data within Copilot’s current LLM context could potentially be exfiltrated, including chat history, documents, and other sensitive organizational information.
  • Turning Automation into a Leak Vector: Helpful automation, Copilot’s core strength, was turned against itself.

Aim Security emphasized that such LLM Scope Violations represent a new class of threat unique to AI applications, requiring novel protective measures beyond traditional security guardrails.

Broader Implications: Tool Poisoning and MCP Attacks

The disclosure of EchoLeak coincided with other research highlighting emerging threats in the AI agent ecosystem, particularly concerning the Model Context Protocol (MCP). CyberArk disclosed a “tool poisoning attack” (TPA) affecting the MCP standard, codenamed Full-Schema Poisoning (FSP).

MCP is designed to allow AI agents to interact consistently with various tools, services, and data sources. However, vulnerabilities in its trust model or client-server architecture could:

  • Allow an attacker to manipulate an agent into leaking data or executing malicious code.
  • Be exploited through techniques like DNS rebinding to access sensitive data on internal networks by tricking a victim’s browser into treating an external domain as localhost, thereby circumventing same-origin policy (SOP) restrictions.

While these MCP-related attacks are distinct from EchoLeak, they illustrate a shared theme: as AI agents become more autonomous and integrated with external tools and data, their interaction protocols and trust models become critical security focal points. The “fundamentally optimistic trust model” of some AI systems, as noted by researchers, can create blind spots if not carefully managed.

Microsoft’s Response and Mitigation

Microsoft acknowledged the EchoLeak vulnerability (CVE-2025-32711) and has released patches as part of its June 2025 security updates, adding it to the list of 68 flaws addressed in that cycle. The company confirmed that no customers were known to be affected by malicious exploitation in the wild.

For users, Microsoft provides Data Loss Prevention (DLP) tags that can allow for excluding external emails from M365 Copilot processing, or restricting processing on emails with specific sensitivity labels. While these features can mitigate such risks, Aim Security noted that activating them might also degrade Copilot’s capabilities if it cannot process these external or sensitive documents.

Regarding the broader MCP-related threats, like DNS rebinding, it’s advised to enforce authentication on MCP servers and validate the “Origin” header on all incoming connections to ensure requests originate from trusted sources.

The Path Forward: Securing the Agentic Future

The EchoLeak vulnerability serves as a critical case study in the evolving security landscape of AI agents and LLM-powered applications. It demonstrates that:

  • Traditional security paradigms need adaptation for AI: Issues like indirect prompt injection and scope violations require new thinking and specialized guardrails.
  • Default behaviors can be exploited: How AI tools retrieve, rank, and combine trusted and untrusted data needs careful scrutiny.
  • The “context window” is a new attack surface: Protecting the data that an LLM has access to at any given moment is paramount.

As organizations increasingly adopt AI assistants like M365 Copilot, understanding these inherent risks and implementing robust security measures, including both vendor-provided patches and organizational best practices, will be crucial. The development of real-time guardrails specifically designed to protect against LLM scope violations and other AI-native attacks will be essential for the secure adoption of these transformative technologies.

To further enhance your cloud security and implement Zero Trust, contact me on LinkedIn Profile or [email protected].

EchoLeak Vulnerability FAQ:

  • What was the EchoLeak vulnerability? EchoLeak (CVE-2025-32711) was a critical-rated, zero-click AI vulnerability in Microsoft 365 Copilot that allowed attackers to potentially exfiltrate sensitive data from a user’s M365 context without requiring any specific user interaction beyond normal Copilot use.
  • How did EchoLeak work without user clicks? Attackers sent an email with a hidden malicious prompt. When the user later asked M365 Copilot a legitimate question, Copilot’s RAG engine processed the malicious email content alongside trusted data. The malicious prompt then instructed Copilot to leak sensitive data, bypassing certain redaction and security features.
  • What kind of data could have been leaked? Any data within the M365 Copilot LLM’s current context could have been at risk, including the user’s chat history, resources fetched from Microsoft Graph (emails, SharePoint files, Teams chats), or any data preloaded into the conversation.
  • Has Microsoft fixed the EchoLeak vulnerability? Yes, Microsoft addressed CVE-2025-32711 in its June 2025 Patch Tuesday release. There is no evidence it was exploited in the wild.
  • What is an “LLM Scope Violation” as described by Aim Security? An LLM Scope Violation occurs when an attacker’s instructions, embedded in untrusted content, trick an AI system into accessing and processing privileged internal data without the user’s explicit intent or interaction, breaking the principle of least privilege.

Relevant Resource List:

  1. The Hacker News: “Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction” (Primary source for this blog post, detailing EchoLeak and related MCP threats).
  2. Aim Security: Blog posts and resources related to EchoLeak and LLM Scope Violations (e.g., “Breaking down ‘EchoLeak’, the First Zero-Click AI Vulnerability Enabling Data Exfiltration from Microsoft 365 Copilot”).
  3. Microsoft Security Response Center (MSRC): Official advisories for CVE-2025-32711 and details on the June 2025 Patch Tuesday updates.
  4. OWASP Top 10 for LLM Applications: Provides context on common LLM vulnerabilities like Indirect Prompt Injection (LLM01).
  5. CyberArk Research: (If available publicly) For details on the Model Context Protocol (MCP) Tool Poisoning Attack (TPA) and Full-Schema Poisoning (FSP).