A recently discovered Artificial Intelligence (AI) vulnerability, called EchoLeak, could have allowed hackers to steal sensitive data from the Microsoft 365 Copilot. Researchers from Aim Labs uncovered the flaw in January 2025 and reported it to Microsoft, which quickly fixed the issue in May.

Microsoft assigned the flaw the identifier CVE-2025-32711 and classified it as critical. However, the company confirmed that no customers were affected, as there was no evidence of real-world attacks. Since the fix was applied server-side, users didn’t need to take any action.

The vulnerability exploited Microsoft 365 Copilot’s natural language processing capabilities in a clever but concerning way. Hackers would send a standard-looking business email containing hidden prompts disguised as normal text. These prompts were carefully crafted to appear harmless to human readers but contained specific instructions for the AI system. When an employee later used Copilot for work tasks, the AI would automatically scan and incorporate the email’s content (including the hidden malicious prompts) into its response generation process.

Because Microsoft 365 Copilot is designed to help with work-related queries by accessing emails and documents, it unknowingly followed these embedded instructions. The AI could then be tricked into revealing confidential information, which would be packaged into what appeared to be legitimate links or document references. The real danger came from how seamlessly this happened: no suspicious downloads, no warning messages, just the AI system quietly following what it thought were valid user requests while actually compromising sensitive data.

The attack was particularly effective because it leveraged Microsoft’s own trusted services for data transfer, making the malicious activity blend in with normal business communications. This bypassed traditional security measures that typically flag connections to external servers, as the data was being routed through approved Microsoft channels like Teams or SharePoint.

Microsoft’s security systems blocked most external domains, but attackers could still use trusted Microsoft services like Teams and SharePoint to sneak out data.

While EchoLeak has been patched, it highlights a growing risk as AI becomes more deeply integrated into business tools. Traditional security measures struggle to keep up with these new threats.

As AI-powered tools become more common, businesses must stay vigilant against these evolving threats. Microsoft’s quick response shows the importance of responsible disclosure and prompt fixes in AI security.

Via: Bleeping Computer

Leave a comment

Your email address will not be published. Required fields are marked *