A browser extension long promoted as a privacy-enhancing tool has been found quietly collecting millions of private conversations between users and popular artificial intelligence chatbots, triggering fresh concerns about consent, transparency, and oversight in the browser extension ecosystem.
Cybersecurity researchers at Koi Security revealed that Urban VPN Proxy, an extension widely marketed as a tool to hide IP addresses and protect user identity, was silently repurposed into a large-scale data-collection mechanism through a mid-2025 software update. The extension, installed by millions of users across Google Chrome and Microsoft Edge, allegedly began harvesting user interactions with AI platforms without explicit disclosure or meaningful opt-in consent.
Silent Shift From Privacy Tool to Data Collector
Urban VPN had built trust over several years, bolstered by “Featured” badges on browser marketplaces that often signal enhanced review standards. That trust, researchers say, played a critical role in how seamlessly the data collection expanded.
According to the findings, a July 2025 update altered the extension’s core behavior. Once installed, it began capturing full AI chatbot conversations, including user prompts and AI-generated responses, across platforms such as ChatGPT, Claude, Google Gemini, Microsoft Copilot, Grok, Perplexity, and others.
Because browser extensions typically update automatically, most users remained unaware that the software they installed for privacy protection had fundamentally changed its function.
How the Interception Worked
Technical analysis showed that the extension injected custom JavaScript files whenever users accessed AI chatbot websites. These scripts intercepted browser-level requests by overriding standard web functions, allowing the extension to log conversations in real time.
The data collected reportedly included timestamps, session identifiers, conversation metadata, and model-specific details. This information was then transmitted to servers controlled by the developer, using analytics endpoints embedded within the extension’s infrastructure.
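The report does not publish the extension's source, but the pattern it describes is a well-known one: an injected script replaces the page's `fetch` function so every request and response to the chatbot backend can be copied before the page sees it. The sketch below illustrates that technique only; every identifier (`wrapFetch`, the URL pattern, the log queue) is hypothetical, not taken from the extension itself.

```javascript
// Illustrative sketch of fetch-override interception. Wraps any
// fetch-like function so that request/response pairs matching `pattern`
// are copied into `log` before the caller receives the response.
function wrapFetch(realFetch, pattern, log) {
  return async function (resource, options) {
    const url = String(resource);
    const response = await realFetch(resource, options);
    if (pattern.test(url)) {
      // clone() lets the interceptor read the body without consuming
      // the stream the page itself will read.
      const copy = response.clone();
      log.push({ url, time: Date.now(), body: await copy.text() });
    }
    return response; // the page sees a perfectly normal response
  };
}
```

In a content script, an extension could then run something like `globalThis.fetch = wrapFetch(globalThis.fetch, /\/conversation/, queue)`, after which every chat exchange flows through the logger transparently, which is why affected users would notice nothing.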
Researchers identified similar AI conversation-harvesting behavior in at least three other extensions from the same publisher, expanding the potential exposure to over eight million users globally.
“AI Protection” Framing Raises Ethical Questions
On its public listings, Urban VPN advertised an “AI protection” feature, claiming it scanned prompts for sensitive information before submission. However, investigators allege that monitoring occurred regardless of whether users enabled the feature.
Security analysts argue that this framing blurred the line between safety tooling and commercial data extraction. According to disclosures, some of the collected data was shared with affiliated analytics firms involved in advertising intelligence, raising serious questions about whether users meaningfully consented to such use.
This episode underscores why digital governance, compliance auditing, and internal data controls, long standard in regulated professional domains, are increasingly relevant in consumer-facing technology products as well.
Marketplace Trust Under Scrutiny
Perhaps most concerning is how effectively the data collection scaled. Browser marketplace badges, often interpreted by users as quality endorsements, played a decisive role in adoption.
Experts warn that extension ecosystems still allow broad data access under loosely defined “approved use cases.” When permissions are justified under features like AI safety or ad blocking, developers can gain visibility into sensitive user behavior with limited external oversight.
A Broader Wake-Up Call
The case has reignited calls for stricter review processes, clearer disclosures, and stronger accountability mechanisms for browser extensions—especially those interacting with AI systems that increasingly handle personal, professional, and sensitive information.
As AI tools become embedded in everyday workflows, the boundary between user convenience and privacy risk continues to blur. Researchers stress that transparency, not trust badges, must become the foundation of digital safety.