Tech
Study Finds Most People Can No Longer Tell AI-Generated Voices from Real Ones
A new study has found that most people can no longer distinguish between human voices and their artificial intelligence (AI)-generated counterparts, raising concerns about misinformation, fraud, and the ethical use of voice-cloning technology.
The research, published in the journal PLoS One by scientists from Queen Mary University of London, found that participants identified genuine human voices only slightly more often than they spotted cloned AI voices. Of 80 voice samples, half human and half AI-generated, 58 percent of the cloned voices were mistaken for real, while 62 percent of the genuine human voices were correctly identified.
“The most important aspect of the research is that AI-generated voices, specifically voice clones, sound as human as recordings of real human voices,” said Dr. Nadine Lavan, lead author of the study and senior lecturer in psychology at Queen Mary University. She added that these realistic voices were created using commercially available tools, meaning anyone can produce convincing replicas without advanced technical skills or large budgets.
AI voice cloning works by analyzing vocal data to capture and reproduce unique characteristics such as tone, pitch, and rhythm. This precise imitation has made the technology increasingly popular among scammers, who use cloned voices to impersonate loved ones or public figures. According to research by the University of Portsmouth, nearly two-thirds of people over 75 have been targeted by attempted phone scams, with about 60 percent of those attempts made through voice calls.
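To illustrate the kind of vocal data such systems analyze, the short sketch below extracts simple pitch, loudness, and timing descriptors from a recording. It is an illustration only, not the study's method or any commercial cloning pipeline: it assumes the open-source librosa audio library, a hypothetical local file name, and an illustrative helper function of our own.

```python
# Minimal sketch: the kinds of vocal features (pitch, loudness, timing)
# that voice-cloning systems learn to reproduce. Assumes the open-source
# `librosa` library and a local WAV file; not any vendor's actual pipeline.
import numpy as np
import librosa

def describe_voice(path: str) -> dict:
    y, sr = librosa.load(path, sr=None)          # waveform and sample rate

    # Pitch contour (fundamental frequency) over a typical speech range
    f0, _, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )

    # Loudness envelope (root-mean-square energy per frame)
    rms = librosa.feature.rms(y=y)[0]

    # Rough rhythm proxy: how often new speech onsets occur per second
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    duration = len(y) / sr

    return {
        "mean_pitch_hz": float(np.nanmean(f0)),
        "pitch_variability_hz": float(np.nanstd(f0)),
        "mean_loudness_rms": float(rms.mean()),
        "onsets_per_second": len(onsets) / duration,
    }

# Example usage (hypothetical file):
# print(describe_voice("sample.wav"))
```

A cloning model goes further than these summary statistics, learning the full spectral detail of a speaker, but the descriptors above capture the tone, pitch, and rhythm characteristics the article refers to.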
The spread of AI-generated “deepfake” audio has also been used to mimic politicians, journalists, and celebrities, raising fears about its potential to manipulate public opinion and spread false information.
Dr. Lavan urged developers to adopt stronger ethical safeguards and work closely with policymakers. “Companies creating the technology should consult ethicists and lawmakers to address issues around voice ownership, consent, and the legal implications of cloning,” she said.
Despite its risks, researchers say the technology also has significant potential for positive impact. AI-generated voices can help restore speech to people who have lost their ability to speak or allow users to design custom voices that reflect their identity.
“This technology could transform accessibility in education, media, and communication,” Lavan noted. She highlighted examples such as AI-assisted audio learning, which has been shown to improve reading engagement among students with neurodiverse conditions like ADHD.
Lavan and her team plan to continue studying how people interact with AI-generated voices, exploring whether knowing a voice is artificial affects trust, engagement, or emotional response.
“As AI voices become part of our daily lives, understanding how we relate to them will be crucial,” she said.
Tech
European Journalist Suspended for Using AI-Generated Fake Quotes
Peter Vandermeersch, a senior European journalist working with Dutch publisher Mediahuis, has been temporarily suspended after an investigation revealed he published quotes generated by artificial intelligence (AI) as if they were genuine. Fabricated expert quotes reportedly appeared in 15 of the 53 articles he wrote for the publisher.
The Dutch newspaper NRC reported that Vandermeersch inserted “dozens” of fabricated quotes into articles published on two Mediahuis websites. Some of the statements attributed to experts could not be found in the sources Vandermeersch cited, including news articles and scientific studies. Seven of the individuals whose quotes were used confirmed they had never made the statements attributed to them.
Vandermeersch served as chief executive of Mediahuis Ireland from 2022 to 2025 before taking on a fellowship role in journalism and society at Mediahuis. He confirmed his temporary suspension on his blog, saying he relied on AI tools including ChatGPT, Perplexity, and Google’s Notebook to summarise lengthy reports, trusting the outputs to be accurate.
Instead, the systems generated fabricated quotes that “put words into people’s mouths,” Vandermeersch admitted. “That was not just careless, it was wrong,” he wrote. “It is particularly painful that I made precisely the mistake I have repeatedly warned colleagues about: these language models are so good that they produce irresistible quotes you are tempted to use as an author.”
Vandermeersch said he first discovered the issue last year, when two of his articles were found to contain AI-generated quotes. He did not correct the errors at the time, which allowed the problem to persist. “When I realised this a few months ago, my enthusiasm diminished, as did my use of AI,” he said.
He explained that he continues to use AI for tasks such as translation, generating ideas, creating headlines, and developing story angles, but with “far less naive trust than before.” Mediahuis has yet to announce any further disciplinary measures or whether it will retract the affected articles.
The case has raised fresh concerns about the use of AI in journalism, highlighting the risks of relying on automated systems to generate content without verification. Industry experts warn that while AI tools can be valuable for research and drafting, uncritical use can lead to serious ethical breaches, including the misrepresentation of sources.
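Part of that verification can be automated. The sketch below is a minimal illustration using only Python's standard library: it flags attributed quotes that cannot be found verbatim or near-verbatim in the cited source text. The function name and the similarity threshold are arbitrary examples, and a real editorial check would still require confirming the quote with the source directly.

```python
# Minimal sketch: flag attributed quotes that cannot be found (exactly or
# approximately) in the cited source text. Illustrative only; real editorial
# verification also requires checking with the quoted person or document.
from difflib import SequenceMatcher

def quote_is_supported(quote: str, source_text: str, threshold: float = 0.85) -> bool:
    """Return True if the quote appears verbatim or near-verbatim in the source."""
    q = " ".join(quote.lower().split())
    src = " ".join(source_text.lower().split())

    if q in src:                      # exact match after normalising whitespace
        return True

    # Approximate match: slide a window of the quote's length over the source
    window = len(q)
    step = max(1, window // 4)
    for start in range(0, max(1, len(src) - window + 1), step):
        chunk = src[start:start + window]
        if SequenceMatcher(None, q, chunk).ratio() >= threshold:
            return True
    return False

# Example usage (hypothetical files):
# quote = "The results were entirely unexpected."
# if not quote_is_supported(quote, open("cited_report.txt").read()):
#     print("UNSUPPORTED QUOTE: verify with the source before publishing")
```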
Mediahuis said it takes the matter seriously and is reviewing editorial procedures to prevent similar incidents in the future. The scandal has sparked a wider discussion in European media about the ethical boundaries of AI in reporting, particularly when it comes to quoting real people.
The incident underscores the growing tension between technological convenience and journalistic integrity, as newsrooms across Europe experiment with AI tools while balancing accuracy and accountability.
Tech
Cyberattacks Intensify as Iran Conflict Spills Into Digital Domain
State-linked and hacktivist groups have claimed a series of cyberattacks against the United States and Israel since the war with Iran began, marking a significant escalation in the digital dimension of the conflict.
One of the most notable incidents involved Stryker, which confirmed on March 11 that a cyberattack had disrupted its global network. According to reports, employees encountered the logo of Handala, an Iran-linked hacking group, on login pages across the company’s systems. The breach reportedly targeted the firm’s Microsoft-based infrastructure, though the full extent of the disruption remains unclear.
Handala has claimed responsibility for the attack, stating it exploited cloud management systems to remotely wipe large numbers of devices worldwide. The group said the operation was carried out in retaliation for a missile strike in Iran. Independent verification of these claims is still pending.
Cybersecurity analysts say the attack is part of a broader campaign by groups linked to Iran’s security apparatus. According to findings from CloudSEK, organisations associated with the Islamic Revolutionary Guard Corps have targeted US critical infrastructure. These include CyberAv3ngers, APT33 and APT55, which are accused of attempting to infiltrate industrial systems such as power grids and water facilities.
Experts say some of these groups use simple methods, including default passwords, to access systems, while others deploy malware aimed at disrupting operations or gathering intelligence. Additional networks linked to Iran’s Ministry of Intelligence have also been active, targeting telecommunications, energy companies and government organisations.
At the same time, the United States and Israel are conducting their own cyber operations. General Dan Caine said US Cyber Command played a key role early in the conflict, disrupting Iranian communications and sensor networks. Defence Secretary Pete Hegseth confirmed that artificial intelligence and cyber tools are being used alongside conventional military operations.
Israeli intelligence has also reportedly relied on hacked data to support military planning, highlighting the growing role of cyber capabilities in modern warfare.
Hacktivist activity has surged as well. More than 60 groups formed a loose coalition known as the Cyber Islamic Resistance, coordinating attacks through online platforms. These groups have claimed hundreds of operations, including attempts to disrupt Israeli infrastructure and private sector systems. Analysts warn that such actors are often less restrained and may pose risks to civilian networks.
The conflict has also drawn in groups from outside the region, including actors based in Iraq, Russia and other parts of the Middle East. Some have targeted government websites and transport infrastructure, while pro-Israeli groups have carried out retaliatory attacks against Iranian entities.
Security experts say the growing scale and coordination of cyber operations reflect a shift in how modern conflicts are fought, with digital attacks now running parallel to military action on the ground.