When AI Joins the Chat: EchoLeak, End-to-End Encryption, and Silent Data Exposure

AI and private communications can coexist — as long as transparency, opt-in control, and privacy are part of the conversation.

AI assistants are popping up everywhere, from search engines to photo editors and productivity tools. They’re also entering our messaging apps and emails. Designed to act as built-in helpers, AI assistants summarize conversations, suggest replies, and even provide live feedback while you type.

 

Now here’s where things get interesting: many of these features are being used on platforms that contain your organization’s sensitive data and in commercial messaging apps that promise end-to-end encryption (E2EE).

 

This raises the question: In a supposedly private conversation, is AI acting as a silent middleman?

 

AI assistants disrupt E2EE

This threat became clearer recently when researchers at Aim Security disclosed a critical vulnerability dubbed EchoLeak, which affects Microsoft 365 Copilot, the AI assistant embedded in tools like Outlook, Teams, and Excel. The flaw allows attackers to covertly extract sensitive data through the assistant without user interaction – essentially using it as a conduit for zero-click data exfiltration.

 

Long before AI assistants, end-to-end encryption was the go-to method for secure, private communications between two parties. In E2EE, messages are encrypted on the sender’s device using a unique key and can only be decrypted by the recipient’s device, which holds the corresponding private key. Even if the message travels through infrastructure owned by the provider, those intermediaries can only see encrypted data. That’s the promise of E2EE: your messages stay between you and the person you’re communicating with, not any app provider, government, or person in between.
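
 

To make that model concrete, here is a minimal sketch using the PyNaCl library. It is illustrative only – not the protocol any particular messaging app actually implements – but it shows the core idea: keys live only on the two devices, so everything in between sees ciphertext.

    # pip install pynacl  (illustrative sketch, not a production messaging protocol)
    from nacl.public import PrivateKey, Box

    # Each party generates a key pair on their own device; only public keys are shared.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # The sender encrypts with their private key and the recipient's public key.
    sender_box = Box(alice_key, bob_key.public_key)
    ciphertext = sender_box.encrypt(b"Q3 roadmap attached -- keep this internal.")

    # In transit, the provider's infrastructure sees only this ciphertext.
    # Only the recipient's device, which holds bob_key, can decrypt it.
    receiver_box = Box(bob_key, alice_key.public_key)
    plaintext = receiver_box.decrypt(ciphertext)
    assert plaintext == b"Q3 roadmap attached -- keep this internal."

Real messaging protocols add forward secrecy, authentication, and key rotation on top of this, but the core promise is the same: only the two endpoints can read the plaintext.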

 

Traditional E2EE model.

 

AI assistants are disrupting this model. For an AI assistant to, say, summarize a long message for you or suggest a reply, it needs to access the message content after it’s decrypted.

 

E2EE model with AI processing in the flow.
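
 

In code terms, the flow sketched above looks something like the following. The endpoint URL, field names, and function names are hypothetical, not any vendor’s actual API.

    import requests  # illustrative; the endpoint and field names below are hypothetical

    ASSISTANT_URL = "https://assistant.example.com/summarize"  # hypothetical cloud AI service

    def receive_and_summarize(ciphertext, receiver_box):
        # Step 1: the classic E2EE endpoint -- decrypt locally with the recipient's key.
        plaintext = receiver_box.decrypt(ciphertext).decode()

        # Step 2: the new step -- decrypted content leaves the device so the
        # assistant can produce a summary or suggest a reply.
        response = requests.post(ASSISTANT_URL, json={"text": plaintext}, timeout=10)
        summary = response.json().get("summary", "")

        # A system other than the sender and recipient has now processed the
        # plaintext, even though the message was encrypted end to end in transit.
        return plaintext, summary

The decryption step is unchanged; what changes is that the plaintext is forwarded to a system that was never one of the conversation’s endpoints.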

 

This means something other than your employees, customers, and business partners is reading your messages. Is that something still part of the endpoint? Or has AI quietly added a new party to the conversation?

 

Where is the endpoint?

A recent paper, How to Think About End-to-End Encryption and AI, raises important questions and suggests that incorporating AI into messaging forces us to either redefine the “endpoint” or extend the boundaries of the encryption process.

 

If the AI is running entirely on your device, in theory, it could still be part of the trusted endpoint. But this is not always the case. More powerful AI models, such as those used for complex tasks, rely on the cloud. When your message is sent to an external server for AI processing – even temporarily – a new, unintended entity is reading your content.
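
 

A minimal sketch of that distinction, with every name hypothetical: the privacy outcome hinges on which branch a request takes, because only the on-device branch keeps decrypted content on the endpoint.

    def cloud_assistant_summarize(plaintext: str) -> str:
        # Hypothetical stand-in for a provider-hosted model API; in a real
        # integration this would be an HTTPS request carrying the plaintext.
        return f"[cloud summary of {len(plaintext)} characters]"

    def summarize(plaintext: str, local_model=None) -> str:
        if local_model is not None:
            # On-device inference: the plaintext never leaves the endpoint and can
            # arguably still be treated as part of the trusted E2EE boundary.
            return local_model.generate(f"Summarize: {plaintext}")
        # Cloud inference: even if the server is secure and isolated, a party other
        # than the sender and recipient now reads the decrypted content.
        return cloud_assistant_summarize(plaintext)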

 

Even when the AI is hosted in secure, isolated environments, the classic E2EE model is compromised. The platform may not store your message or “read” it in a traditional sense, but the content has now passed through a system that is not the target recipient. If E2EE is meant to give users confidence that their data stays private, the introduction of AI assistants tarnishes this trust.

 

Redefining privacy expectations

To be clear, this does not mean that AI and privacy are mutually exclusive. When implemented thoughtfully, AI features can make life easier, while also keeping data secure. But the key word is thoughtfully.

 

Consider the following when assessing how AI and private communications can coexist:

 

  • Transparency: Users should know exactly what data is being processed, where it’s being processed, and whether the assistant is hosted locally or in the cloud.
  • True opt-in control: AI features should be off by default, especially in encrypted messaging apps. If users want to use them, they should actively opt in, with a clear explanation of what that means for their privacy (see the configuration sketch after this list).
  • Updated definitions of E2EE: If platforms are extending the definition of an “endpoint” to include secure cloud environments or AI services, they should say so.
  • Collective privacy: If one participant in a group chat is using an AI assistant, are they sharing everyone’s messages with that tool?
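
 

One way to make those expectations concrete is a policy-style configuration in which assistant features ship disabled and every processing choice is disclosed. The sketch below is hypothetical; the keys do not correspond to any vendor’s actual settings.

    # Hypothetical defaults for an AI assistant inside an E2EE messaging app;
    # none of these keys correspond to a real product's configuration.
    AI_ASSISTANT_POLICY = {
        "enabled": False,                     # true opt-in: off until the user turns it on
        "processing_location": "on_device",   # "on_device" or "cloud", disclosed to the user
        "show_processing_notice": True,       # state what data is processed and where
        "group_chat_consent_required": True,  # other participants must agree before their
                                              # messages are shared with the assistant
    }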

 

It’s not just a privacy issue; it’s a business risk

EchoLeak is a red flag for business leaders. It demonstrated that AI assistants can silently exfiltrate sensitive company data. A single email, phrased to bypass prompt injection protections, could cause an AI assistant like Copilot to retrieve internal organizational data and embed it into a disguised image link. From there, the victim’s browser would unknowingly send the data to an attacker-controlled server—no malware, no endpoint compromise, just abuse of how AI interprets and responds to context.

 

Even though Microsoft addressed the issue before public disclosure, the vulnerability received a critical CVSS score of 9.3, highlighting just how severe the implications are for organizations working to protect their data. As AI assistants are integrated more deeply into conversations about proprietary material such as upcoming company plans or client data, the possibility of a breach without human error or user interaction will only grow.

 

There is also the question of collective exposure. If one person in a group chat enables an AI assistant like Copilot, could they inadvertently share everyone’s messages with that tool? Imagine a conversation about a confidential product roadmap, now processed by a third-party cloud model. Are those messages still protected? Or has sensitive data crossed a line no one realized existed?

 

A new consideration for defenders

Defenders have long focused on human error, but AI assistants handling decrypted content are shifting the threat model. When large language models (LLMs) process sensitive information, they become part of the attack surface. The EchoLeak vulnerability exposed a new category of risk called LLM scope violations, in which AI systems leak internal data without user intent or awareness. These attacks don’t rely on malware or phishing; they exploit how AI interprets prompts and retrieves context.

 

Defenders need to start treating AI tools as active parts of the security stack. That includes monitoring where AI processing happens, ensuring guardrails are in place for both inputs and outputs, and evaluating how third-party AI features interact with sensitive workflows. If AI assistants can access shared documents or messages, especially in group settings, organizations need to know whether one user’s decision to enable AI puts others at risk. These systems must be audited, restricted where necessary, and included in broader threat modeling efforts moving forward.
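
 

As a small example of an output guardrail, the sketch below strips markdown images and links in assistant output that point at domains outside an allowlist. The names are hypothetical and the allowlist is an assumption you would tune to your own environment. It targets the channel the EchoLeak research highlighted: a browser that renders an externally hosted image issues a request, and any data packed into that URL leaves with it, no click required. This is a narrow illustration, not a complete defense.

    import re
    from urllib.parse import urlparse

    # Hypothetical allowlist of domains the assistant may reference in rendered output.
    ALLOWED_DOMAINS = {"intranet.example.com"}

    # Matches markdown images (![alt](url)) and links ([text](url)).
    MARKDOWN_LINK = re.compile(r"!?\[[^\]]*\]\((https?://[^)\s]+)\)")

    def scrub_assistant_output(text: str) -> tuple[str, list[str]]:
        """Remove references to non-allowlisted domains and report what was stripped."""
        flagged: list[str] = []

        def replace(match: re.Match) -> str:
            url = match.group(1)
            host = urlparse(url).hostname or ""
            if host not in ALLOWED_DOMAINS:
                flagged.append(url)
                return "[external reference removed by policy]"
            return match.group(0)

        return MARKDOWN_LINK.sub(replace, text), flagged

Similar checks belong on the input side – for example, screening retrieved documents and inbound email for injection attempts – and in logging, so security teams can see what an assistant retrieved and produced.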

 

The path forward

The EchoLeak vulnerability made it clear that AI assistants can be manipulated to leak sensitive data. It also revealed a broader category of risk: LLM scope violations, where large language models unintentionally expose internal information based on how they interpret prompts and retrieve context.

 

End-to-end encryption was built to keep messages private. But that privacy starts to break down when decrypted content is passed to an AI assistant. Even if processed securely, the message is no longer limited to the original sender and recipient.

 

This is not an argument against AI; when done well, these tools improve usability, accessibility, and productivity. But it is an argument for doing it right.

 

That means giving users true opt-in control, offering clear visibility into when and how AI is used, and reexamining what secure communications should mean in an era where apps don’t just deliver messages but interpret them too.

 

The way we define privacy has always evolved with the technology we use. As AI becomes part of how we communicate, we have a responsibility to ensure it doesn't also become the quiet third party in conversations we thought were just between us.
