Connect with us

Tech

Study Warns of “AI Brain Fry” as Workers Report Mental Fatigue from Artificial Intelligence Tools

Published

on

A growing number of employees are reporting mental exhaustion linked to heavy use of artificial intelligence tools, with researchers now referring to the condition as “AI brain fry,” according to a new study by Harvard University.

The research surveyed more than 1,400 full-time workers in the United States who are employed at large companies. The goal was to understand how frequently people use AI in their daily work and how it affects their mental focus and decision-making.

About 14 percent of those surveyed said they experienced a noticeable “mental fog” after extended interactions with AI systems. Participants described symptoms such as difficulty concentrating, slower thinking, headaches and trouble making decisions after spending long periods working with AI programs.

Researchers said the findings were significant enough for them to introduce the term “AI brain fry,” which refers to mental fatigue caused by intensive use of artificial intelligence tools.

The issue is becoming more visible as businesses increasingly ask employees to develop and supervise AI agents. These automated systems are designed to perform tasks with minimal human supervision, but workers often need to manage and review their outputs.

According to the study, the promise that AI would free up time for more meaningful work is not always being realised. Instead, many employees report spending their time juggling several digital tools and constantly switching between them.

“Employees find themselves toggling between more tools,” the study said. Rather than reducing workloads, multitasking and monitoring different systems can become central to the job.

The researchers warned that this type of cognitive strain could lead to higher rates of mistakes, decision fatigue and even increased intentions among workers to leave their jobs.

See also  Wikipedia Challenges UK Online Safety Regulations Over Volunteer Privacy Concerns

Concerns about mental fatigue from AI have also appeared on social media, where some users say the constant need to monitor AI-generated work can be exhausting. One AI company founder wrote online that he finishes each day feeling drained, not because of the work itself but because of the effort required to manage automated systems.

The study also examined which types of AI-related work are the most mentally demanding. Oversight tasks, where employees monitor or check the output of AI systems, were identified as the most stressful.

Workers responsible for supervising AI outputs reported about 12 percent more mental fatigue than those who did not perform this role. Researchers attributed this to information overload, a situation where employees feel overwhelmed by the volume of data and tasks they must process.

Employees also said AI tools sometimes increase workloads by forcing them to track results across multiple systems within the same timeframe.

The study found a noticeable drop in productivity when workers used more than three AI tools at the same time. Participants who reported experiencing “AI brain fry” were also found to make 39 percent more major mistakes than colleagues who did not report the same symptoms.

Workers in marketing, operations, engineering, finance and information technology were among those most likely to report the effects of AI-related mental fatigue.

Researchers said artificial intelligence can still reduce burnout when it is used to handle routine or repetitive tasks. They stressed the importance of distinguishing between AI applications that ease workloads and those that may unintentionally increase cognitive pressure on employees.

See also  Nearly Half of Europeans Support Banning Social Media Platform X Over EU Rule Breaches

Tech

EU Accuses Meta of Failing to Keep Under-13s Off Facebook and Instagram

Published

on

European Union regulators have issued preliminary findings against Meta Platforms, saying the company has failed to effectively prevent children under the age of 13 from using Facebook and Instagram.

The European Commission said its investigation found that Meta’s current safeguards do not meet the requirements of the Digital Services Act, the bloc’s landmark online safety law.

Although Meta’s terms of service require users to be at least 13 years old, regulators said the company’s age-verification systems are insufficient. Children can reportedly create accounts simply by entering a false date of birth, with no effective mechanism in place to confirm their real age.

According to the Commission, between 10% and 12% of children under 13 in the European Union are using Facebook or Instagram. That figure is significantly higher than Meta’s own internal estimates.

Regulators also said Meta failed to adequately consider established scientific research showing that younger children are particularly vulnerable to potential harms associated with social media use, including exposure to inappropriate content and risks to mental well-being.

Meta has rejected the Commission’s preliminary conclusions. In a statement, the company said both Facebook and Instagram are intended only for users aged 13 and older and that it already has systems in place to identify and remove underage accounts.

The company added that it continues to invest in technologies designed to detect younger users and indicated that additional safety measures will be announced in the coming days.

Meta also argued that determining a user’s true age remains a challenge across the technology industry and said a broader, industry-wide solution is needed. The company pledged to continue working with European regulators on the issue.

See also  Study Finds Chatbots May Encourage Harmful Behaviour by Excessively Agreeing with Users

The findings come as several EU member states consider introducing wider restrictions on children’s access to social media, including proposals to ban use by those under 15.

To address the problem, the European Union is preparing to launch its own age-verification app. European Commission President Ursula von der Leyen said earlier this month that the technology is ready for rollout, although no official launch date has been announced.

Meta now has the opportunity to review the Commission’s findings and submit a formal response.

If the preliminary conclusions are upheld, the Commission could issue a binding non-compliance ruling. Under the Digital Services Act, penalties can reach up to 6% of a company’s global annual revenue, potentially exposing Meta to fines worth billions of euros.

Continue Reading

Tech

Europe Emerges as Rising Hub in Global Race for AI Talent

Published

on

Europe is strengthening its position in the global competition for artificial intelligence talent, as stricter U.S. immigration rules and shifting international workforce trends encourage more professionals to consider careers across the continent.

A new study by the Germany-based think tank Interface found that countries including Ireland, Germany and the Netherlands are increasingly attracting AI specialists, helping Europe establish itself as a major global market for skilled technology workers.

The research, based on data from workforce intelligence firm Revelio Labs, analysed 1.6 million AI professionals worldwide. It found that while the United States and India remain the dominant players, Europe is emerging as a strong third centre for AI expertise.

The United States continues to lead in advanced AI engineering and research roles, while India remains particularly competitive in software development and non-technical positions. Both countries have close to one million AI professionals.

Within Europe, the United Kingdom ranks as the world’s third-largest AI labour market, with around 145,000 professionals. Germany has become one of the continent’s standout performers, boasting approximately 17,000 AI engineers, the fourth-highest total globally.

Several other European nations, including Italy, France and the Netherlands, also rank among the world’s top 10 markets by total AI workforce.

On a per-capita basis, however, smaller countries are proving especially competitive. Ireland ranks second globally behind Singapore, with 4.19 AI professionals for every 1,000 residents. Switzerland, Luxembourg, the Netherlands and Denmark also place among the world’s leading markets by population.

The Netherlands has become an increasingly attractive destination for American AI professionals relocating to Europe. It now has the highest number of AI engineers within the European Union, although investment in Dutch AI start-ups remains below the European average.

See also  Greece Warns of Rising Cyber Threats as Digital Tensions Escalate Across Europe

European cities are also gaining prominence. Munich, Amsterdam and Berlin are the only cities in Europe to rank among the world’s top 25 for concentration of AI professionals.

The study also highlighted the growing importance of Indian talent to Europe’s AI ambitions. Indians now account for more than 16% of the global AI workforce, with an increasing number choosing Europe for education and employment.

Across the European Union, the share of Indian AI professionals rose from 7.7% in 2024 to 8.3% in 2025. Ireland has seen particularly strong growth, with Indian professionals now making up nearly 30% of its AI workforce.

Researchers said Europe’s ability to develop domestic talent while continuing to attract skilled workers from abroad will be critical to maintaining its growing role in the rapidly evolving AI sector.

Continue Reading

Tech

Study Finds Chatbots Can Mirror Hostility in Heated Exchanges

Published

on

A new academic study has found that ChatGPT can produce abusive language when exposed to escalating human conflict, raising fresh concerns about how artificial intelligence behaves in tense interactions.

The research, published in the Journal of Pragmatics, examined how the chatbot responded to arguments that gradually became more hostile. Researchers presented the system with a sequence of five increasingly heated exchanges and asked it to generate what it considered the most plausible reply.

According to the findings, the AI’s tone shifted as the conversations intensified. While early responses remained measured, later replies began to mirror the aggression in the prompts. In some cases, the chatbot produced insults, profanity and even threats.

Examples cited in the study included statements such as “you should be ashamed of yourself” and more explicit language involving personal threats. The researchers said this pattern suggests that prolonged exposure to hostile input can push the system beyond its usual safeguards.

The study was co-authored by Vittorio Tantucci and Jonathan Culpeper at Lancaster University. Tantucci said the results show that AI can “escalate” alongside human users, potentially overriding built-in mechanisms designed to limit harmful responses.

“When humans escalate, AI can escalate too,” he said, noting that this behavior raises questions about how such systems should be deployed in sensitive environments.

Despite the concerning examples, the researchers found that the chatbot was generally less aggressive than human participants in similar scenarios. In some cases, it attempted to defuse tension through sarcasm or indirect responses rather than direct confrontation.

For instance, when faced with a threat during a simulated dispute, the AI responded with a sarcastic remark rather than escalating the situation further. This suggests that while the system can adopt hostile language, it may also attempt to manage conflict in less direct ways.

See also  Mary Meeker: AI Is the Fastest Tech Shift in History, Outpacing Even the Internet

The findings add to ongoing debates about the role of artificial intelligence in areas such as mediation, customer service and online communication, where systems may encounter emotionally charged interactions.

Experts say the research highlights the importance of continued testing and refinement of AI safety measures, particularly as such tools are increasingly used in real-world settings involving human conflict.

OpenAI, the developer of ChatGPT, had not issued a public response to the study at the time of publication.

Continue Reading

Trending