Microsoft 365 Copilot, the AI tool built into Microsoft Office workplace applications including Word, Excel, Outlook, PowerPoint, and Teams, harbored a critical security flaw that, according to researchers, signals a broader risk of AI agents being hacked.
The flaw, revealed by AI security startup Aim Security and shared with Fortune, is the first known “zero-click” attack on an AI agent, an AI that acts autonomously to achieve specific goals.
The nature of the vulnerability means that the user doesn’t need to click anything or interact with a message for an attacker to access sensitive information from apps and data sources connected to the AI agent.
In the case of Microsoft 365 Copilot, the vulnerability lets a hacker trigger an attack simply by sending an email to a user, with no phishing or malware needed.
Instead, the exploit uses a series of clever techniques to turn the AI assistant against itself, bypassing Copilot’s built-in protections, which are meant to ensure that users can access only their own files.
The researchers at Aim Security dubbed the flaw “EchoLeak.” Microsoft told Fortune that it has already fixed the issue and that its customers were unaffected.
The bigger concern? That the flaw could apply to other kinds of agents, from those built on Anthropic’s Model Context Protocol (MCP) to platforms like Salesforce’s Agentforce.
—Sharon Goldman