Tech

Study Warns of “AI Brain Fry” as Workers Report Mental Fatigue from Artificial Intelligence Tools


A growing number of employees are reporting mental exhaustion linked to heavy use of artificial intelligence tools, with researchers now referring to the condition as “AI brain fry,” according to a new study by Harvard University.

The researchers surveyed more than 1,400 full-time workers at large companies in the United States. The goal was to understand how frequently people use AI in their daily work and how it affects their mental focus and decision-making.

About 14 percent of those surveyed said they experienced a noticeable “mental fog” after extended interactions with AI systems. Participants described symptoms such as difficulty concentrating, slower thinking, headaches and trouble making decisions after spending long periods working with AI programs.

Researchers said the findings were significant enough for them to introduce the term “AI brain fry,” which refers to mental fatigue caused by intensive use of artificial intelligence tools.

The issue is becoming more visible as businesses increasingly ask employees to develop and supervise AI agents. These automated systems are designed to perform tasks with minimal human supervision, but workers often need to manage and review their outputs.

According to the study, the promise that AI would free up time for more meaningful work is not always being realised. Instead, many employees report spending their time juggling several digital tools and constantly switching between them.

“Employees find themselves toggling between more tools,” the study said. Rather than reducing workloads, multitasking and monitoring different systems can become central to the job.

The researchers warned that this type of cognitive strain could lead to higher rates of mistakes, decision fatigue and even increased intentions among workers to leave their jobs.


Concerns about mental fatigue from AI have also appeared on social media, where some users say the constant need to monitor AI-generated work can be exhausting. One AI company founder wrote online that he finishes each day feeling drained, not because of the work itself but because of the effort required to manage automated systems.

The study also examined which types of AI-related work are the most mentally demanding. Oversight tasks, where employees monitor or check the output of AI systems, were identified as the most stressful.

Workers responsible for supervising AI outputs reported about 12 percent more mental fatigue than those who did not perform this role. Researchers attributed this to information overload, a situation where employees feel overwhelmed by the volume of data and tasks they must process.

Employees also said AI tools sometimes increase workloads by forcing them to track results across multiple systems within the same timeframe.

The study found a noticeable drop in productivity when workers used more than three AI tools at the same time. Participants who reported experiencing “AI brain fry” were also found to make 39 percent more major mistakes than colleagues who did not report the same symptoms.

Workers in marketing, operations, engineering, finance and information technology were among those most likely to report the effects of AI-related mental fatigue.

Researchers said artificial intelligence can still reduce burnout when it is used to handle routine or repetitive tasks. They stressed the importance of distinguishing between AI applications that ease workloads and those that may unintentionally increase cognitive pressure on employees.



Activists Launch Campaign for EU-Funded Social Media Platform


A group of activists has begun a campaign calling for the creation of a publicly funded European social media platform, after the European Commission formally registered a European Citizens’ Initiative on the proposal.

The registration allows organisers to begin collecting signatures across the European Union in support of the idea. Under the rules governing such initiatives, campaigners must gather at least one million signatures from citizens in a minimum of seven EU member states.

The signature drive is expected to take up to 12 months once it begins. Campaign organisers have up to six months to prepare the process before collecting support, meaning the entire effort could extend over roughly 18 months.

If the campaign reaches the required threshold, the European Commission would be required to consider the proposal and decide whether to draft legislation supporting the project.

The initiative reflects growing debate in Europe about the influence of global social media companies. Most of the world’s largest platforms are operated by companies based in the United States or China, and European policymakers have repeatedly criticised them over data protection, content moderation and broader social impacts.

Calls for a European alternative have intensified in recent years. The discussion gained momentum after Elon Musk purchased the social media platform X, formerly known as Twitter, in 2022. Since then, some European users have experimented with alternative platforms, although most have returned to larger networks because of their established user bases.

One example of a European-developed platform is Mastodon, which operates through a decentralised network of servers. Despite its presence in the market, it has not achieved the same level of global popularity as the largest social media services.


Supporters of the proposal argue that a publicly funded European platform could offer a different model. According to the initiative's description, the network would operate as a service designed for the public and would be overseen by society rather than private owners.

Campaign organisers say such a platform could remain independent from political pressure while protecting the rights of users and promoting fair treatment for all participants.

Even if the initiative succeeds in gathering the required signatures, many practical questions remain. It is unclear whether the project would involve building an entirely new platform or supporting existing services. The timeline for development is also uncertain because any new legislation would still need to pass through the EU’s lawmaking process.

If approved, the project would likely require a procurement process before development begins. This step alone could take significant time.

The cost of the proposed platform is another key issue. Organisers estimate that developing and operating the network could cost about one euro per citizen each year. Across the European Union, that would amount to roughly €450 million annually.

They argue that such a contribution would represent a small expense for individual citizens while providing Europe with a digital platform designed specifically for public interests. Whether EU institutions and member states would agree to fund such a project remains an open question.



Workplace Culture Driving Women Out of Tech Jobs in Europe, Report Finds


Women are leaving technology jobs in Europe largely because of workplace culture, according to a new report that warns the gender gap in the sector could widen as artificial intelligence reshapes the industry.

The study by consulting firm McKinsey & Company found that women accounted for just 19 percent of employees in core technology roles across Europe in 2025, a decline of three percentage points from the previous year. The drop suggests that long-running efforts to improve gender representation in the industry have failed to deliver meaningful progress.

“Workplace culture is the biggest reason why women are leaving their tech jobs,” the report said, adding that the growing influence of artificial intelligence could deepen the divide if companies fail to address the problem.

“As AI reshapes roles and value creation in tech, existing gender gaps could widen without deliberate action,” the report warned.

The gender imbalance becomes even more visible as careers progress. Women’s participation in the technology workforce falls by as much as 18 percentage points before reaching management levels. As a result, women hold only 13 percent of management positions in tech companies and just 8 percent of executive or corporate leadership roles.

Researchers say the early loss of women from the talent pipeline contributes to the lack of representation in leadership positions.

The report also found that women tend to be concentrated in a limited range of roles that do not typically lead to senior leadership positions. Women represent 39 percent of employees in product management and 54 percent in design roles. However, these positions account for a relatively small share of the overall technology workforce and rarely lead to executive decision-making roles.


In fast-growing fields such as artificial intelligence, data and analytics, men continue to dominate entry-level hiring. The report said this trend is especially concerning as AI expands across the sector.

Researchers warned that the imbalance could result in fewer perspectives shaping technologies that increasingly affect society. The report said this could create a “narrowing of perspectives at precisely the levels at which bias, accountability and societal impact must be addressed.”

The challenges facing women in tech persist even in countries known for strong gender equality policies. In Finland, women represent 36 percent of technology workers, while in Sweden the figure is 23 percent.

The study also examined why many women choose to leave the sector. Nearly half of those surveyed reported experiencing sexism or bias in the workplace over the past year. Around 82 percent said they felt pressure to prove themselves more than their male colleagues.

Many respondents said they often felt isolated in their roles because they were the only woman on their team or in meetings.

The report also highlighted what it described as “office housework,” which includes tasks such as organising events or mediating team conflicts. Women are more likely to be asked to perform these duties and spend an average of 200 hours a year on them.

McKinsey said companies could address the gap by improving workplace culture, setting clear representation goals and strengthening mentorship programmes. The report also recommended investing in reskilling programmes to help women move into emerging AI roles as the technology workforce evolves.



US Government Designates Anthropic a Supply Chain Risk, Military Contractors Reconsider Use of Claude


The Trump administration has officially designated artificial intelligence company Anthropic as a supply chain risk, a move that could force government contractors to stop using its AI chatbot, Claude. The Pentagon said Thursday that it informed Anthropic leadership that the company and its products are now considered a supply chain threat, effective immediately.

The decision follows a standoff over Anthropic’s refusal to remove safety guardrails designed to prevent mass surveillance of Americans and the development of fully autonomous weapons. President Donald Trump and Defence Secretary Pete Hegseth had previously accused the company of endangering national security and threatened a series of penalties.

Anthropic CEO Dario Amodei responded that the designation is legally questionable and said the company plans to challenge it in court. He emphasised that the exceptions Claude enforces are limited to high-level use cases, not operational military decisions, and that prior discussions with the Pentagon had focused on maintaining access to Claude while establishing a smooth transition if required.

The Pentagon argued that restricting access to Claude could endanger warfighters. “The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk,” the department said. Trump has given the military six months to phase out the AI system, which is already embedded across multiple military and national security platforms.

Some defence contractors have already responded. Lockheed Martin said it will follow the Pentagon’s direction and seek other AI providers but does not anticipate major disruptions. Microsoft, whose lawyers studied the scope of the risk designation, said it can continue working with Anthropic on non-defence projects.


The move has drawn criticism from lawmakers and former officials. Senator Kirsten Gillibrand called the designation “a dangerous misuse of a tool meant to address adversary-controlled technology.” A letter signed by former defence and intelligence leaders, including former CIA director Michael Hayden, argued that applying supply chain rules to a domestic company is a “category error” and sets a troubling precedent. The letter stressed that such rules are meant to protect against foreign adversaries, not American innovators operating under the law.

Despite losing some defence contracts, Anthropic has seen a surge in consumer downloads over the past week, with more than a million people signing up for Claude daily. The app has surpassed OpenAI’s ChatGPT and Google’s Gemini in more than 20 countries’ Apple App Store rankings, reflecting public support for the company’s stance.

The dispute has also intensified Anthropic’s rivalry with OpenAI, whose CEO Sam Altman acknowledged that a recent military deal for ChatGPT in classified environments was rushed and required adjustments. Amodei expressed regret over an internal note he sent criticising OpenAI and the Pentagon’s decision, apologising for language that suggested the company was punished for not offering “dictator-like praise” to Trump.

The Pentagon’s designation of Anthropic as a supply chain risk marks an unprecedented escalation in the government’s effort to assert control over AI technologies used in national security, highlighting tensions between innovation, ethics, and military priorities.
