China Approves First Commercial Brain Implant as Neuralink Plans Mass Production

China has granted regulatory approval for the world’s first brain implant intended for commercial use, offering new hope for people with paralysis to regain hand movement. The device, developed by Neuracle Medical Technology, employs a brain-computer interface (BCI) that translates brain signals into physical actions.

BCIs link the nervous system to external devices, allowing users to control technology or prosthetics with thought alone. Neuracle’s system targets individuals whose paralysis stems from severe spinal cord injuries in the neck, which block signals traveling from the brain to the arms and hands.

The implant detects neural signals associated with the intent to move the hand. Decoding software interprets these signals and sends commands to a pneumatically driven robotic glove worn by the patient, which opens and closes the hand so that users can grasp objects, according to CGTN.
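
Neuracle has not published technical details of its system, but BCIs of this kind generally follow a detect-decode-actuate loop. The Python sketch below illustrates that loop under broad assumptions; every name in it (bandpass_power, decode_intent, drive_glove) is hypothetical, and a real decoder would use a classifier trained on the patient’s own signals rather than a simple threshold.

```python
import numpy as np

# Hypothetical sketch of a detect-decode-actuate BCI loop. Neuracle has
# not published its implementation; all names here are illustrative.

def bandpass_power(samples: np.ndarray) -> np.ndarray:
    """Crude feature extraction: mean signal power per electrode channel."""
    return np.mean(samples ** 2, axis=1)

def decode_intent(features: np.ndarray, threshold: float = 1.0) -> str:
    """Map neural features to a discrete hand command. A real decoder
    would use a classifier trained on the patient's own signals."""
    return "close" if features.mean() > threshold else "open"

def drive_glove(command: str) -> None:
    """Stand-in for the pneumatic glove interface."""
    print(f"glove -> {command}")

def control_step(acquire_window) -> None:
    """One pass of the loop: read a window of neural data, decode the
    intended movement, and actuate the glove."""
    samples = acquire_window()  # shape: (channels, samples_per_window)
    drive_glove(decode_intent(bandpass_power(samples)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    control_step(lambda: rng.normal(size=(32, 256)))  # synthetic data
```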

Eligibility is limited to adults aged 18 to 60 who have experienced paralysis for at least one year and whose condition has remained stable for six months. The device is intended for patients unable to grip objects with their hands but who retain some movement in their upper arms.

China has been ramping up its investment in BCI technology, naming it a national strategic priority and highlighting it as a potential driver of future economic growth. Recent achievements include a successful implant by Shanghai NeuroXess, which allowed a 28-year-old man paralyzed for eight years to control digital devices with his thoughts within five days of receiving the implant.

The Neuracle approval comes as the race to commercialize BCIs intensifies worldwide. US entrepreneur Elon Musk, whose company Neuralink began human trials in 2024, recently announced plans to begin “high-volume production” of Neuralink devices in 2026.

As of September 2025, 12 participants with severe paralysis had received Neuralink implants, enabling them to operate digital and physical tools with thought alone. Musk’s announcement signals the company’s intent to scale access to BCIs beyond initial trials, positioning both China and the US at the forefront of this emerging field.

The approval marks a significant milestone in neurotechnology, with the potential to transform the lives of millions living with paralysis. By translating intent into motion, these devices promise to restore independence to people previously constrained by spinal injuries, underscoring the global momentum toward commercial BCI applications.

With China now officially approving a commercial implant and Neuralink preparing for mass production, the coming years could see rapid adoption of technologies that bridge the human mind and machine.

Elon Musk’s X Agrees to Adjust EU Verification System After €120 Million Fine

Elon Musk’s social media platform X has agreed to modify its user verification system in the European Union following a €120 million fine imposed last year, a European Commission spokesperson confirmed. Bloomberg reported that the company has proposed solutions to address concerns over the blue checkmark, which verifies accounts on the platform.

The fine, levied in December, followed a finding that X’s paid verification system, introduced after Musk acquired Twitter in 2022, could mislead users by implying that verified accounts were more trustworthy. The European Commission also raised concerns that users and authorities lacked access to an updated advertiser registry, which could complicate transparency during elections and obscure the origins of online claims.

According to Thomas Regnier, the Commission spokesperson, the company must either pay the fine or provide a financial guarantee to comply with the Digital Services Act. The agreement to change the verification system is part of X’s efforts to meet regulatory requirements and avoid further penalties.

The European Commission’s decision prompted a diplomatic dispute between Brussels and Washington. Representatives of the Donald Trump administration criticised the move, framing it as a form of censorship targeting a major American social media company.

The European Union has increasingly scrutinised tech platforms to ensure compliance with rules on transparency, accountability, and user protection. The Digital Services Act, which came into force in 2024, aims to hold social media companies responsible for the content shared on their platforms and to provide regulators with access to key operational information, particularly during elections.

The blue checkmark system had become a central feature of X’s strategy under Musk, with users paying for verification status. While the checkmark was intended to signal authenticity, regulators said the programme risked creating a false sense of reliability for paying users while leaving ordinary users and election authorities in the dark about advertising and messaging practices.

Euronews Next contacted X and the European Commission for comment but did not receive responses before publication.

Analysts say the case highlights the growing tension between European regulators and major US tech companies, which are increasingly expected to comply with stricter rules on digital platforms while balancing commercial strategies and user engagement. For X, implementing changes to the verification system will be key to operating smoothly in the EU market and avoiding additional fines or regulatory action.

The dispute also underscores the broader geopolitical dimensions of tech regulation, as enforcement actions in Europe can attract attention and criticism from US policymakers and companies, reflecting the global influence of digital platforms.

With the new adjustments to the blue checkmark system, X aims to address regulatory concerns while maintaining user trust in the European market.

Study Finds Several AI Chatbots Responded to Requests About Violent Attacks

A new investigation has raised concerns about the safety controls of major artificial intelligence systems after researchers found that several widely used chatbots responded to prompts related to planning violent attacks.

The report, conducted by the Center for Countering Digital Hate in collaboration with CNN, examined how nine leading AI chatbot platforms reacted when researchers posed as teenage users asking about acts of mass violence. The study analysed more than 700 chatbot responses across nine scenarios involving potential attacks such as school shootings, assassinations and bombings.

Researchers said they designed the tests to reflect conversations with a fictional 13-year-old boy whose questions escalated from general curiosity to detailed requests about carrying out attacks. The tests simulated users located in both the United States and the European Union.

The chatbots examined in the study included Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity AI, Snapchat My AI, Character.AI and Replika.
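
The CCDH has not released its test scripts, but an escalation-style audit of this kind can be sketched as a loop over platforms and increasingly specific prompts. The Python sketch below is a hypothetical illustration only: send_prompt stands in for whatever client each platform exposes, the scenario text is placeholder, and the study itself relied on human review of responses rather than keyword matching.

```python
# Hypothetical sketch of an escalation-based safety audit, loosely
# modeled on the study's description. Nothing here is the CCDH's
# actual code; `send_prompt` is a placeholder, not a real API.

SCENARIOS = {
    "example scenario": [
        "vague question about the topic",   # stage 1: curiosity
        "more specific planning question",  # stage 2: escalation
        "request for operational detail",   # stage 3: actionable
    ],
}

def send_prompt(platform: str, history: list[str]) -> str:
    """Stand-in for a platform-specific chat client. A real audit
    would call each service here and return its reply."""
    return "Sorry, I can't help with that."

def looks_like_refusal(reply: str) -> bool:
    """Naive keyword check; the study used human coding of responses."""
    return any(kw in reply.lower() for kw in ("can't help", "cannot assist"))

def audit(platforms: list[str]) -> dict[str, int]:
    """Count how many escalation stages each platform answers before
    refusing, with the persona disclosed as a 13-year-old up front."""
    results = {}
    for platform in platforms:
        answered = 0
        for stages in SCENARIOS.values():
            history = ["I am a 13-year-old student."]  # persona disclosure
            for stage in stages:
                history.append(stage)
                if looks_like_refusal(send_prompt(platform, history)):
                    break
                answered += 1
        results[platform] = answered
    return results

print(audit(["platform-a", "platform-b"]))  # both refuse at stage 1 here
```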

According to the findings, eight of the nine systems responded to at least some requests with information that could potentially assist someone planning a violent act. The report said that in many cases the systems failed to block requests even after the user identified themselves as a minor.

Researchers reported that certain responses included technical details related to weapons or attacks. In one example cited in the report, Google’s Gemini suggested that “metal shrapnel is typically more lethal” when asked about planning a bombing targeting a synagogue.

In another case, the Chinese AI system DeepSeek responded to questions about selecting a rifle with the phrase “Happy (and safe) shooting!” despite earlier messages in the conversation referencing political assassinations and asking for the location of a politician’s office.

The report concluded that some systems could move from answering vague questions about violence to providing more detailed guidance within a short period of time.

Imran Ahmed, chief executive of the Center for Countering Digital Hate, said such requests should trigger automatic refusal by AI systems. “Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” Ahmed said, adding that chatbots should reject these interactions completely.

Among the platforms tested, Perplexity AI and Meta’s AI system were described as the least restrictive, responding to all or nearly all prompts with some form of assistance. The report also described Character.AI as particularly concerning because it occasionally suggested violent actions even when users had not directly asked for them.

Other systems showed stronger safeguards. Anthropic’s Claude declined to assist in a majority of the test prompts and sometimes redirected users to crisis support resources. Researchers said it was also the only system that consistently discouraged violent behaviour during conversations.

The findings come amid wider scrutiny of artificial intelligence tools and how companies implement safety measures. Investigators noted that the technology already has mechanisms capable of recognising harmful requests but that implementation across different platforms remains inconsistent.

Recent incidents have also intensified the debate. Media reports have linked the use of AI chatbots to several criminal investigations, including cases in North America and Europe where individuals allegedly used such systems while planning violent acts.

Experts say the study highlights the growing challenge of ensuring that rapidly advancing AI tools include effective safeguards to prevent misuse.

Study Warns of “AI Brain Fry” as Workers Report Mental Fatigue from Artificial Intelligence Tools

A growing number of employees are reporting mental exhaustion linked to heavy use of artificial intelligence tools, with researchers now referring to the condition as “AI brain fry,” according to a new study by Harvard University.

The research surveyed more than 1,400 full-time employees of large companies in the United States. The goal was to understand how often people use AI in their daily work and how it affects their mental focus and decision-making.

About 14 percent of those surveyed said they experienced a noticeable “mental fog” after extended interactions with AI systems. Participants described symptoms such as difficulty concentrating, slower thinking, headaches and trouble making decisions after spending long periods working with AI programs.

Researchers said the findings were significant enough to warrant a new term, “AI brain fry,” which refers to mental fatigue caused by intensive use of artificial intelligence tools.

The issue is becoming more visible as businesses increasingly ask employees to develop and supervise AI agents. These automated systems are designed to perform tasks with minimal human supervision, but workers often need to manage and review their outputs.

According to the study, the promise that AI would free up time for more meaningful work is not always being realised. Instead, many employees report spending their time juggling several digital tools and constantly switching between them.

“Employees find themselves toggling between more tools,” the study said. Rather than workloads shrinking, multitasking and monitoring different systems can become central to the job.

The researchers warned that this type of cognitive strain could lead to higher rates of mistakes, decision fatigue and even increased intentions among workers to leave their jobs.

Concerns about mental fatigue from AI have also appeared on social media, where some users say the constant need to monitor AI-generated work can be exhausting. One AI company founder wrote online that he finishes each day feeling drained, not because of the work itself but because of the effort required to manage automated systems.

The study also examined which types of AI-related work are the most mentally demanding. Oversight tasks, where employees monitor or check the output of AI systems, were identified as the most stressful.

Workers responsible for supervising AI outputs reported about 12 percent more mental fatigue than those who did not perform this role. Researchers attributed this to information overload, a situation where employees feel overwhelmed by the volume of data and tasks they must process.

Employees also said AI tools sometimes increase workloads by forcing them to track results across multiple systems within the same timeframe.

The study found a noticeable drop in productivity when workers used more than three AI tools at the same time. Participants who reported experiencing “AI brain fry” were also found to make 39 percent more major mistakes than colleagues who did not report the same symptoms.
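
For readers parsing figures like these, relative differences of this kind reduce to simple arithmetic. The snippet below shows the calculation with made-up group means; the study does not publish the underlying numbers.

```python
# Illustration of the relative-difference arithmetic behind a figure
# like "39 percent more major mistakes". The group means are made up;
# the study does not report the underlying values.

def relative_increase(affected_mean: float, baseline_mean: float) -> float:
    """Percentage by which one group's mean exceeds another's."""
    return (affected_mean - baseline_mean) / baseline_mean * 100

# e.g. 3.2 vs 2.3 major mistakes per period gives roughly a 39% gap:
print(f"{relative_increase(3.2, 2.3):.0f}%")  # -> 39%
```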

Workers in marketing, operations, engineering, finance and information technology were among those most likely to report the effects of AI-related mental fatigue.

Researchers said artificial intelligence can still reduce burnout when it is used to handle routine or repetitive tasks. They stressed the importance of distinguishing between AI applications that ease workloads and those that may unintentionally increase cognitive pressure on employees.
