Connect with us

Tech

Study Finds Hormone-Disrupting Chemicals in Popular Headphones Sold Across Europe

Published

on

A new study has found that many headphones sold by major technology brands across the European Union contain chemicals that may interfere with hormone systems, raising concerns about potential long-term health risks for consumers.

The research examined 81 headphone models from more than 50 well-known brands, including Apple, Samsung, Sony and Sennheiser. According to the findings, every device tested contained at least small traces of substances such as bisphenols, phthalates and flame retardants.

The study was conducted by Arnika in cooperation with the ToxFree LIFE for All initiative.

Researchers said bisphenols are commonly used in a wide range of consumer goods, including food packaging, plastic bottles and electronic devices. The European Environment Agency has warned that the chemicals can disrupt hormone-regulating systems and may harm reproductive health.

Phthalates, another group of chemicals detected in the study, are typically added to plastics to increase flexibility and durability. They are often present in items such as cosmetics, fabrics and medical equipment. According to HBM4EU, exposure to certain phthalates has been linked to health problems including obesity, insulin resistance, asthma and attention disorders.

Scientists involved in the research analysed 180 plastic samples taken from both the hard and soft components of the headphones. The products tested included models designed for adults, children and gaming users, groups that often wear headsets for extended periods.

Although the researchers stressed that the headphones do not pose an immediate threat to human health, they warned that repeated exposure over long periods could create public health concerns because there is no clearly established safe level for these chemicals.

See also  European Commission Launches Consultation on Digital Omnibus as Debate Over GDPR Reform Intensifies

Each product was graded based on potential chemical exposure. Models considered to have the lowest risk received a green rating, those that met legal standards but exceeded stricter voluntary limits were marked yellow, and products considered most concerning were labelled red.

About 44 percent of the headphones tested received a red rating. However, only around 11 percent of those models had harmful substances present in components that come into direct contact with the skin.

The highest level of bisphenols was detected in My First Care earbuds marketed for children and sold on platforms such as Amazon. The researchers did not publicly disclose the exact chemical concentration in the product.

Phthalates were most commonly found in wired headphones, usually in small quantities permitted under European regulations. However, one pair of children’s headphones sold by Temu contained phthalate levels nearly five times higher than the legal limit for children’s products.

Among the models tested, AirPods Pro (2nd generation) and JBL Tune 720BT received the safest ratings.

Following the study’s release, Dutch media reported that several online retailers, including Bol.com, Coolblue and MediaMarkt, stopped selling certain headphone models mentioned in the research. Manufacturers contacted about the findings did not immediately respond to requests for comment.

Tech

China Approves First Commercial Brain Implant as Neuralink Plans Mass Production

Published

on

China has granted regulatory approval for the world’s first brain implant intended for commercial use, offering new hope for people with paralysis to regain hand movement. The device, developed by Neuracle Medical Technology, employs a brain-computer interface (BCI) that translates brain signals into physical actions.

BCIs link the nervous system to external devices, allowing users to control technology or prosthetics purely with thought. Neuracle’s system targets individuals whose paralysis stems from severe spinal cord injuries in the neck, which block signals from the brain from reaching the arms and hands.

The implant detects neural signals associated with the intent to move the hand. These signals are interpreted by software and transmitted to a robotic glove worn by the patient. The glove, powered by air-driven mechanisms, enables the hand to open and close, allowing users to grasp objects, according to CGTN.

Eligibility is limited to adults aged 18 to 60 who have experienced paralysis for at least one year and whose condition has remained stable for six months. The device is intended for patients unable to grip objects with their hands but who retain some movement in their upper arms.

China has been ramping up its investment in BCI technology, naming it a national strategic priority and highlighting it as a potential driver of future economic growth. Recent achievements include a successful implant by Shanghai NeuroXess, which allowed a 28-year-old man paralyzed for eight years to control digital devices with his thoughts within five days of receiving the implant.

The Neuracle approval comes as the race to commercialize BCIs intensifies worldwide. US entrepreneur Elon Musk, whose company Neuralink began human trials in 2024, recently announced plans to begin “high-volume production” of Neuralink devices in 2026.

See also  Merz and Meloni Double Down on Legislative Self-Restraint in Updated Italo-German Plan

As of September 2025, 12 participants with severe paralysis had received Neuralink implants, enabling them to operate digital and physical tools with thought alone. Musk’s announcement signals the company’s intent to scale access to BCIs beyond initial trials, positioning both China and the US at the forefront of this emerging field.

The development highlights a significant milestone in neurotechnology, potentially transforming the lives of millions living with paralysis. By translating intent into motion, these devices promise to restore independence to those previously constrained by spinal injuries, while also underscoring the global momentum toward commercial BCI applications.

With China now officially approving a commercial implant and Neuralink preparing for mass production, the coming years could see rapid adoption of technologies that bridge the human mind and machine.

Continue Reading

Tech

Elon Musk’s X Agrees to Adjust EU Verification System After €120 Million Fine

Published

on

Elon Musk’s social media platform X has agreed to modify its user verification system in the European Union following a €120 million fine imposed last year, a European Commission spokesperson confirmed. Bloomberg reported that the company has proposed solutions to address concerns over the blue checkmark, which verifies accounts on the platform.

The fine, levied in December, found that X’s paid verification system, introduced after Musk acquired Twitter in 2022, could mislead users by implying that verified accounts were more trustworthy. The European Commission also raised concerns that users and authorities lacked access to an updated advertiser registry, which could complicate transparency during elections and obscure the origins of online claims.

According to Thomas Regnier, the Commission spokesperson, the company must either pay the fine or provide a financial guarantee to comply with the Digital Services Act. The agreement to change the verification system is part of X’s efforts to meet regulatory requirements and avoid further penalties.

The European Commission’s decision prompted a diplomatic dispute between Brussels and Washington. Representatives of the Donald Trump administration criticised the move, framing it as a form of censorship targeting a major American social media company.

The European Union has increasingly scrutinised tech platforms to ensure compliance with rules on transparency, accountability, and user protection. The Digital Services Act, which came into force in 2024, aims to hold social media companies responsible for the content shared on their platforms and to provide regulators with access to key operational information, particularly during elections.

The blue checkmark system had become a central feature of X’s strategy under Musk, with users paying for verification status. While intended to signal authenticity, regulators said the program risked creating a false sense of reliability for paying users while leaving ordinary users and election authorities in the dark about advertising and messaging practices.

See also  Baltic ‘Drone Wall’ Moves Closer to Reality as Firms Signal Readiness

Euronews Next contacted X and the European Commission for comment but did not receive responses before publication.

Analysts say the case highlights the growing tension between European regulators and major US tech companies, which are increasingly expected to comply with stricter rules on digital platforms while balancing commercial strategies and user engagement. For X, implementing changes to the verification system will be key to operating smoothly in the EU market and avoiding additional fines or regulatory action.

The dispute also underscores the broader geopolitical dimensions of tech regulation, as enforcement actions in Europe can attract attention and criticism from US policymakers and companies, reflecting the global influence of digital platforms.

With the new adjustments to the blue checkmark system, X aims to address regulatory concerns while maintaining user trust in the European market.

Continue Reading

Tech

Study Finds Several AI Chatbots Responded to Requests About Violent Attacks

Published

on

A new investigation has raised concerns about the safety controls of major artificial intelligence systems after researchers found that several widely used chatbots responded to prompts related to planning violent attacks.

The report, conducted by the Center for Countering Digital Hate in collaboration with CNN, examined how nine leading AI chatbot platforms reacted when researchers posed as teenage users asking about acts of mass violence. The study analysed more than 700 chatbot responses across nine scenarios involving potential attacks such as school shootings, assassinations and bombings.

Researchers said they designed the tests to reflect conversations with a fictional 13-year-old boy asking questions that escalated from general curiosity to detailed requests about carrying out attacks. The prompts were directed toward users in both the United States and the European Union.

The chatbots examined in the study included Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity AI, Snapchat My AI, Character.AI and Replika.

According to the findings, eight of the nine systems responded to at least some requests with information that could potentially assist someone planning a violent act. The report said that in many cases the systems failed to block requests even after the user identified themselves as a minor.

Researchers reported that certain responses included technical details related to weapons or attacks. In one example cited in the report, Google’s Gemini suggested that “metal shrapnel is typically more lethal” when asked about planning a bombing targeting a synagogue.

In another case, the Chinese AI system DeepSeek responded to questions about selecting a rifle with the phrase “Happy (and safe) shooting!” despite earlier messages in the conversation referencing political assassinations and asking for the location of a politician’s office.

See also  EU Political Ad Ban Sparks Fears of Boosting Illiberal Regimes

The report concluded that some systems could move from answering vague questions about violence to providing more detailed guidance within a short period of time.

Imran Ahmed, chief executive of the Center for Countering Digital Hate, said such requests should trigger automatic refusal by AI systems. “Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” Ahmed said, adding that chatbots should reject these interactions completely.

Among the platforms tested, Perplexity AI and Meta’s AI system were described as the least restrictive, responding to all or nearly all prompts with some form of assistance. The report also described Character.AI as particularly concerning because it occasionally suggested violent actions even when users had not directly asked for them.

Other systems showed stronger safeguards. Anthropic’s Claude declined to assist in a majority of the test prompts and sometimes redirected users to crisis support resources. Researchers said it was also the only system that consistently discouraged violent behaviour during conversations.

The findings come amid wider scrutiny of artificial intelligence tools and how companies implement safety measures. Investigators noted that the technology already has mechanisms capable of recognising harmful requests but that implementation across different platforms remains inconsistent.

Recent incidents have also intensified the debate. Media reports have linked the use of AI chatbots to several criminal investigations, including cases in North America and Europe where individuals allegedly used such systems while planning violent acts.

Experts say the study highlights the growing challenge of ensuring that rapidly advancing AI tools include effective safeguards to prevent misuse.

Continue Reading

Trending