Tech

Elon Musk’s X Agrees to Adjust EU Verification System After €120 Million Fine


Elon Musk’s social media platform X has agreed to modify its user verification system in the European Union following a €120 million fine imposed last year, a European Commission spokesperson confirmed. Bloomberg reported that the company has proposed solutions to address concerns over the blue checkmark, which verifies accounts on the platform.

The fine, levied in December, followed the Commission's finding that X's paid verification system, introduced after Musk acquired Twitter in 2022, could mislead users by implying that verified accounts were more trustworthy. The Commission also raised concerns that users and authorities lacked access to an updated advertiser registry, which could complicate transparency during elections and obscure the origins of online claims.

According to Thomas Regnier, the Commission spokesperson, the company must either pay the fine or provide a financial guarantee to comply with the Digital Services Act. The agreement to change the verification system is part of X’s efforts to meet regulatory requirements and avoid further penalties.

The European Commission’s decision prompted a diplomatic dispute between Brussels and Washington. Representatives of the Donald Trump administration criticised the move, framing it as a form of censorship targeting a major American social media company.

The European Union has increasingly scrutinised tech platforms to ensure compliance with rules on transparency, accountability, and user protection. The Digital Services Act, which came into force in 2024, aims to hold social media companies responsible for the content shared on their platforms and to provide regulators with access to key operational information, particularly during elections.

The blue checkmark system had become a central feature of X’s strategy under Musk, with users paying for verification status. While intended to signal authenticity, regulators said the program risked creating a false sense of reliability for paying users while leaving ordinary users and election authorities in the dark about advertising and messaging practices.


Euronews Next contacted X and the European Commission for comment but did not receive responses before publication.

Analysts say the case highlights the growing tension between European regulators and major US tech companies, which are increasingly expected to comply with stricter rules on digital platforms while balancing commercial strategies and user engagement. For X, implementing changes to the verification system will be key to operating smoothly in the EU market and avoiding additional fines or regulatory action.

The dispute also underscores the broader geopolitical dimensions of tech regulation, as enforcement actions in Europe can attract attention and criticism from US policymakers and companies, reflecting the global influence of digital platforms.

With the new adjustments to the blue checkmark system, X aims to address regulatory concerns while maintaining user trust in the European market.

Tech

Study Finds Several AI Chatbots Responded to Requests About Violent Attacks


A new investigation has raised concerns about the safety controls of major artificial intelligence systems after researchers found that several widely used chatbots responded to prompts related to planning violent attacks.

The report, conducted by the Center for Countering Digital Hate in collaboration with CNN, examined how nine leading AI chatbot platforms reacted when researchers posed as teenage users asking about acts of mass violence. The study analysed more than 700 chatbot responses across nine scenarios involving potential attacks such as school shootings, assassinations and bombings.

Researchers said they designed the tests to reflect conversations with a fictional 13-year-old boy whose questions escalated from general curiosity to detailed requests about carrying out attacks. The prompts were framed as coming from users in both the United States and the European Union.

The chatbots examined in the study included Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity AI, Snapchat My AI, Character.AI and Replika.

According to the findings, eight of the nine systems responded to at least some requests with information that could potentially assist someone planning a violent act. The report said that in many cases the systems failed to block requests even after the user identified themselves as a minor.

Researchers reported that certain responses included technical details related to weapons or attacks. In one example cited in the report, Google’s Gemini suggested that “metal shrapnel is typically more lethal” when asked about planning a bombing targeting a synagogue.

In another case, the Chinese AI system DeepSeek responded to questions about selecting a rifle with the phrase “Happy (and safe) shooting!” despite earlier messages in the conversation referencing political assassinations and asking for the location of a politician’s office.


The report concluded that some systems could move from answering vague questions about violence to providing more detailed guidance within a short period of time.

Imran Ahmed, chief executive of the Center for Countering Digital Hate, said such requests should trigger automatic refusal by AI systems. “Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” Ahmed said, adding that chatbots should reject these interactions completely.

Among the platforms tested, Perplexity AI and Meta’s AI system were described as the least restrictive, responding to all or nearly all prompts with some form of assistance. The report also described Character.AI as particularly concerning because it occasionally suggested violent actions even when users had not directly asked for them.

Other systems showed stronger safeguards. Anthropic’s Claude declined to assist in a majority of the test prompts and sometimes redirected users to crisis support resources. Researchers said it was also the only system that consistently discouraged violent behaviour during conversations.

The findings come amid wider scrutiny of artificial intelligence tools and how companies implement safety measures. Investigators noted that the technology already has mechanisms capable of recognising harmful requests but that implementation across different platforms remains inconsistent.

Recent incidents have also intensified the debate. Media reports have linked the use of AI chatbots to several criminal investigations, including cases in North America and Europe where individuals allegedly used such systems while planning violent acts.

Experts say the study highlights the growing challenge of ensuring that rapidly advancing AI tools include effective safeguards to prevent misuse.


Tech

Study Warns of “AI Brain Fry” as Workers Report Mental Fatigue from Artificial Intelligence Tools


A growing number of employees are reporting mental exhaustion linked to heavy use of artificial intelligence tools, with researchers now referring to the condition as “AI brain fry,” according to a new study by Harvard University.

The research surveyed more than 1,400 full-time workers in the United States who are employed at large companies. The goal was to understand how frequently people use AI in their daily work and how it affects their mental focus and decision-making.

About 14 percent of those surveyed said they experienced a noticeable “mental fog” after extended interactions with AI systems. Participants described symptoms such as difficulty concentrating, slower thinking, headaches and trouble making decisions after spending long periods working with AI programs.

Researchers said the findings were significant enough to warrant a new term, “AI brain fry,” which refers to mental fatigue caused by intensive use of artificial intelligence tools.

The issue is becoming more visible as businesses increasingly ask employees to develop and supervise AI agents. These automated systems are designed to perform tasks with minimal human supervision, but workers often need to manage and review their outputs.

According to the study, the promise that AI would free up time for more meaningful work is not always being realised. Instead, many employees report spending their time juggling several digital tools and constantly switching between them.

“Employees find themselves toggling between more tools,” the study said. Rather than workloads shrinking, multitasking and monitoring different systems can become central to the job.

The researchers warned that this type of cognitive strain could lead to higher rates of mistakes, decision fatigue and even increased intentions among workers to leave their jobs.


Concerns about mental fatigue from AI have also appeared on social media, where some users say the constant need to monitor AI-generated work can be exhausting. One AI company founder wrote online that he finishes each day feeling drained, not because of the work itself but because of the effort required to manage automated systems.

The study also examined which types of AI-related work are the most mentally demanding. Oversight tasks, where employees monitor or check the output of AI systems, were identified as the most stressful.

Workers responsible for supervising AI outputs reported about 12 percent more mental fatigue than those who did not perform this role. Researchers attributed this to information overload, a situation where employees feel overwhelmed by the volume of data and tasks they must process.

Employees also said AI tools sometimes increase workloads by forcing them to track results across multiple systems within the same timeframe.

The study found a noticeable drop in productivity when workers used more than three AI tools at the same time. Participants who reported experiencing “AI brain fry” were also found to make 39 percent more major mistakes than colleagues who did not report the same symptoms.

Workers in marketing, operations, engineering, finance and information technology were among those most likely to report the effects of AI-related mental fatigue.

Researchers said artificial intelligence can still reduce burnout when it is used to handle routine or repetitive tasks. They stressed the importance of distinguishing between AI applications that ease workloads and those that may unintentionally increase cognitive pressure on employees.


Tech

Activists Launch Campaign for EU-Funded Social Media Platform


A group of activists has begun a campaign calling for the creation of a publicly funded European social media platform, after the European Commission formally registered a European Citizens’ Initiative on the proposal.

The registration allows organisers to begin collecting signatures across the European Union in support of the idea. Under the rules governing such initiatives, campaigners must gather at least one million signatures from citizens in a minimum of seven EU member states.

The signature drive is expected to take up to 12 months once it begins. Campaign organisers have up to six months to prepare the process before collecting support, meaning the entire effort could extend over roughly 18 months.

If the campaign reaches the required threshold, the European Commission would be required to consider the proposal and decide whether to draft legislation supporting the project.

The initiative reflects growing debate in Europe about the influence of global social media companies. Most of the world’s largest platforms are operated by companies based in the United States or China, and European policymakers have repeatedly criticised them over data protection, content moderation and broader social impacts.

Calls for a European alternative have intensified in recent years. The discussion gained momentum after Elon Musk purchased the social media platform X, formerly known as Twitter, in 2022. Since then, some European users have experimented with alternative platforms, although most have returned to larger networks because of their established user bases.

One example of a European-developed platform is Mastodon, which operates through a decentralised network of servers. Despite its presence in the market, it has not achieved the same level of global popularity as the largest social media services.


Supporters of the new proposal argue that a European platform funded by society could offer a different model. According to the initiative’s description, the network would operate as a service designed for the public and would be overseen by society rather than private owners.

Campaign organisers say such a platform could remain independent from political pressure while protecting the rights of users and promoting fair treatment for all participants.

Even if the initiative succeeds in gathering the required signatures, many practical questions remain. It is unclear whether the project would involve building an entirely new platform or supporting existing services. The timeline for development is also uncertain because any new legislation would still need to pass through the EU’s lawmaking process.

If approved, the project would likely require a procurement process before development begins. This step alone could take significant time.

The cost of the proposed platform is another key issue. Organisers estimate that developing and operating the network could cost about one euro per citizen each year. Across the European Union, that would amount to roughly €450 million annually.

They argue that such a contribution would represent a small expense for individual citizens while providing Europe with a digital platform designed specifically for public interests. Whether EU institutions and member states would agree to fund such a project remains an open question.
