University research finds popular GenAI web browser assistants are collecting and sharing sensitive user information

New research from UCL, UC Davis and Mediterranea University of Reggio Calabria has raised privacy concerns around the use of popular generative AI web browser assistants.

The researchers analyzed ten of the most popular generative AI browser extensions, including ChatGPT for Google, Merlin, and Microsoft Copilot. The analysis uncovered widespread tracking, profiling, and personalization practices that the researchers said raise serious concerns.

Analysis showed that several AI web browser assistants transmitted full webpage content, including any confidential information visible on screen, to their servers. This included banking details and health data.

Extensions including Sider and TinaMind were found to share users' questions, along with information that could identify them, with platforms including Google Analytics, enabling cross-site tracking.

ChatGPT for Google, Copilot, Monica, and Sider were able to infer user attributes such as age, gender, income, and interests and used this information to personalize responses, even across different browsing sessions. 

None of the companies behind the web browser assistants mentioned above responded to ETIH’s request for comment on the findings of this research.

Perplexity was the only AI web browser assistant that the researchers say showed no evidence of profiling or personalization.

“Though many people are aware that search engines and social media platforms collect information about them for targeted advertising, these AI browser assistants operate with unprecedented access to users’ online behaviour in areas of their online life that should remain private. While they offer convenience, our findings show they often do so at the cost of user privacy, without transparency or consent and sometimes in breach of privacy legislation or the company’s own terms of service,” comments Dr Anna Maria Mandalari, senior author of the study, from UCL Electronic & Electrical Engineering.

“This data collection and sharing is not trivial. Besides the selling or sharing of data with third parties, in a world where massive data hacks are frequent, there’s no way of knowing what’s happening with your browsing data once it has been gathered.”

The study’s authors say their research shows an urgent need for regulatory oversight of AI browser assistants in order to protect users’ personal data. 

Some of the assistants tested were found to violate US data protection laws. While the study was conducted in the US, meaning compatibility with UK/EU GDPR fell outside its remit, its authors say the findings would likely also constitute violations of those laws.
