Tech
Study Finds Most People Can No Longer Tell AI-Generated Voices from Real Ones
A new study has found that most people can no longer distinguish between human voices and their artificial intelligence (AI)-generated counterparts, raising growing concerns about misinformation, fraud, and the ethical use of voice-cloning technologies.
The research, published in the journal PLoS One by scientists from Queen Mary University of London, revealed that participants were able to correctly identify genuine human voices only slightly more often than they could identify cloned AI voices. Out of 80 voice samples, half human and half AI-generated, participants mistook 58 percent of cloned voices for real ones, while correctly identifying only 62 percent of genuine human voices.
“The most important aspect of the research is that AI-generated voices, specifically voice clones, sound as human as recordings of real human voices,” said Dr. Nadine Lavan, lead author of the study and senior lecturer in psychology at Queen Mary University. She added that these realistic voices were created using commercially available tools, meaning anyone can produce convincing replicas without advanced technical skills or large budgets.
AI voice cloning works by analyzing vocal data to capture and reproduce unique characteristics such as tone, pitch, and rhythm. This precise imitation has made the technology increasingly popular among scammers, who use cloned voices to impersonate loved ones or public figures. According to research by the University of Portsmouth, nearly two-thirds of people over 75 have received attempted phone scams, with about 60 percent of those attempts made through voice calls.
The spread of AI-generated “deepfake” audio has also been used to mimic politicians, journalists, and celebrities, raising fears about its potential to manipulate public opinion and spread false information.
Dr. Lavan urged developers to adopt stronger ethical safeguards and work closely with policymakers. “Companies creating the technology should consult ethicists and lawmakers to address issues around voice ownership, consent, and the legal implications of cloning,” she said.
Despite its risks, researchers say the technology also has significant potential for positive impact. AI-generated voices can help restore speech to people who have lost their ability to speak or allow users to design custom voices that reflect their identity.
“This technology could transform accessibility in education, media, and communication,” Lavan noted. She highlighted examples such as AI-assisted audio learning, which has been shown to improve reading engagement among students with neurodiverse conditions like ADHD.
Lavan and her team plan to continue studying how people interact with AI-generated voices, exploring whether knowing a voice is artificial affects trust, engagement, or emotional response.
“As AI voices become part of our daily lives, understanding how we relate to them will be crucial,” she said.
Tech
Experts Question Impact of Australia’s New Social Media Ban for Children Under 16
Australia has introduced sweeping restrictions that prevent children under 16 from creating or maintaining accounts on major social media platforms, but experts warn the measures may not significantly change young people’s online behaviour. The restrictions, which took effect on December 10, apply to platforms including Facebook, Instagram, TikTok, Snapchat, YouTube, Twitch, Reddit and X.
Under the new rules, children cannot open accounts, yet they can still access most platforms without logging in—raising questions about how effective the regulations will be in shaping online habits. The eSafety Commissioner says the reforms are intended to shield children from online pressures, addictive design features and content that may harm their health and wellbeing.
Social media companies are required to block underage users through age-assurance tools that rely on facial-age estimation, ID uploads or parental consent. Ahead of the rollout, authorities tested 60 verification systems across 28,500 facial recognition assessments. The results showed that while many tools could distinguish children from adults, accuracy declined for users aged 16 and 17, for girls, and for non-Caucasian users, with age estimates sometimes off by two years or more. Experts say these limitations mean many teenagers may still find ways around the rules.
“How do they know who is 14 or 15 when the kids have all signed up as being 75?” asked Sonia Livingstone, a social psychology professor at the London School of Economics. She warned that misclassifications will be common as platforms attempt to enforce the regulations.
Meta acknowledged the challenge, saying complete accuracy is unlikely without requiring every user to present government ID—something the company argues would raise privacy and security concerns. Users over 16 who lose access by mistake are allowed to appeal.
Several platforms have criticised the ban, arguing that it removes teenagers from safer, controlled environments. Meta and Google representatives told Australian lawmakers that logged-in teenage accounts already come with protections that limit contact from unknown users, filter sensitive subjects and disable personalised advertising. Experts say these protections are not always effective, citing studies where new YouTube and TikTok accounts quickly received misogynistic or self-harm-related content.
Analysts expect many teenagers to shift to smaller or lesser-regulated platforms. Apps such as Lemon8, Coverstar and Tango have surged into Australia’s top downloads since the start of December. Messaging apps like WhatsApp, Telegram and Signal—exempt from the ban—have also seen a spike in downloads. Livingstone said teenagers will simply “find alternative spaces,” noting that previous bans in other countries pushed young users to new platforms within days.
Researchers caution that gaming platforms such as Discord and Roblox, also outside the scope of the ban, may become new gathering points for young Australians. Studies will be conducted to assess the long-term impact on mental health and whether the restrictions support or complicate parents’ efforts to regulate screen time.
Experts say it may take several years to determine whether the ban delivers meaningful improvements to children’s wellbeing.
