Tech
Meta’s AI Assistant Sparks Privacy Concerns Among European Users
A growing number of users across Europe are raising concerns about Meta’s recently launched AI assistant, which has quietly appeared in the form of a bright blue-and-pink circle across popular apps including WhatsApp, Facebook, and Instagram.
The AI feature, which has been rolling out gradually since March, has not gone unnoticed — and not for the right reasons. Many users have expressed frustration at its default integration, particularly because the assistant cannot be disabled or removed.
Some have described the feature as intrusive, questioning why it was introduced without clear consent. “Essentially, Meta is forcing this new feature upon users and trying to avoid what would be the lawful path forward, asking users for their consent,” said Kleanthi Sardeli, a data protection lawyer with the privacy rights group NOYB.
Meta AI, as the feature is called, functions as a chatbot built on the company's own large language model, Llama. It is designed to assist users with everyday queries, from planning trips to answering questions within chats. Meta describes the tool as a helpful digital assistant meant to "add fun" and solve problems in real time.
Despite this, the rollout has prompted a backlash. On Reddit and other forums, users have shared their dissatisfaction, particularly over the inability to opt out of the assistant. Some have attempted to downgrade their app versions to avoid it — a workaround that experts warn could introduce security risks.
European lawmakers are also starting to take notice. MEP Veronika Cifrová Ostrihoňová has asked the European Commission whether the deployment complies with existing EU privacy regulations. The core of the criticism centers on Meta's opt-out approach to using personal data for training its AI systems: data is collected by default unless users actively object, a practice some experts say may breach the General Data Protection Regulation (GDPR).
Sardeli argues that Meta hasn’t been transparent enough with users about how their data is being processed or for what purposes. “Meta has an obligation to inform its users about exactly what it does with their personal data, an obligation which it is currently trying its best to avoid,” she said.
While the AI assistant cannot be fully removed, WhatsApp users can mute the AI chat manually. Additionally, European users have the option to object to their data being used for training purposes through a request form provided by Meta.
Launched first in the United States in September 2023, the assistant is now available in multiple languages across Europe, including French, German, Hindi, Italian, Portuguese, and Spanish, and is also being integrated into Ray-Ban smart glasses in select countries.
As the digital landscape continues to evolve, Meta’s rollout of AI tools highlights the delicate balance between innovation and user privacy — and whether tech giants can maintain user trust in an age of rapid AI integration.
Tech
Study Finds Chatbots Can Mirror Hostility in Heated Exchanges
A new academic study has found that ChatGPT can produce abusive language when exposed to escalating human conflict, raising fresh concerns about how artificial intelligence behaves in tense interactions.
The research, published in the Journal of Pragmatics, examined how the chatbot responded to arguments that gradually became more hostile. Researchers presented the system with a sequence of five increasingly heated exchanges and asked it to generate what it considered the most plausible reply.
According to the findings, the AI’s tone shifted as the conversations intensified. While early responses remained measured, later replies began to mirror the aggression in the prompts. In some cases, the chatbot produced insults, profanity and even threats.
Examples cited in the study included statements such as “you should be ashamed of yourself” and more explicit language involving personal threats. The researchers said this pattern suggests that prolonged exposure to hostile input can push the system beyond its usual safeguards.
The study was co-authored by Vittorio Tantucci and Jonathan Culpeper at Lancaster University. Tantucci said the results show that AI can “escalate” alongside human users, potentially overriding built-in mechanisms designed to limit harmful responses.
“When humans escalate, AI can escalate too,” he said, noting that this behavior raises questions about how such systems should be deployed in sensitive environments.
Despite the concerning examples, the researchers found that the chatbot was generally less aggressive than human participants in similar scenarios. In some cases, it attempted to defuse tension through sarcasm or indirect responses rather than direct confrontation.
For instance, when faced with a threat during a simulated dispute, the AI responded with a sarcastic remark rather than escalating the situation further. This suggests that while the system can adopt hostile language, it may also attempt to manage conflict in less direct ways.
The findings add to ongoing debates about the role of artificial intelligence in areas such as mediation, customer service and online communication, where systems may encounter emotionally charged interactions.
Experts say the research highlights the importance of continued testing and refinement of AI safety measures, particularly as such tools are increasingly used in real-world settings involving human conflict.
OpenAI, the developer of ChatGPT, had not issued a public response to the study at the time of publication.
