Tech
Report reveals AI-generated videos of children circulating on TikTok, linked to illegal content on Telegram
A recent investigation has found that AI-generated videos showing young girls in sexualised clothing or suggestive poses have gained widespread attention on TikTok, raising serious concerns about child exploitation online. The Spanish fact-checking organisation Maldita analysed over 5,200 videos across more than 20 accounts, which collectively have more than 550,000 followers and nearly six million likes. Many videos featured girls in bikinis, school uniforms, or tight clothing.
Maldita’s analysis also revealed that comments on these videos contained links to external platforms, including Telegram communities that sell child sexual abuse material. The organisation reported 12 such groups to Spanish authorities. The TikTok accounts involved were generating revenue through the platform’s subscription model, in which followers pay a monthly fee for access to a creator’s content. TikTok keeps roughly half of the revenue under this arrangement.
The report comes amid global efforts to protect minors online. Australia and Denmark, along with the European Union, are introducing or considering social media restrictions for users under 16 with the goal of curbing exposure to harmful content. TikTok’s own policies require creators to label AI-generated content and allow for the removal of content deemed harmful to individuals. Despite this, Maldita found that most of the videos it examined carried no AI identifiers or watermarks. Some content, however, displayed the platform’s “TikTok AI Alive” watermark, which is applied automatically when still images are converted into videos.
In response to the findings, both Telegram and TikTok emphasised their commitment to preventing child sexual abuse material. Telegram stated that it scans all media on its public platform against previously removed content to prevent its spread. In 2025 alone, the platform removed over 909,000 groups and channels containing child sexual abuse material.
TikTok said 99 percent of content harmful to minors is removed automatically, and that 97 percent of offending AI-generated content is proactively taken down. The platform said it immediately suspends or bans accounts that share sexually explicit content involving children and reports them to the United States’ National Center for Missing and Exploited Children (NCMEC). TikTok also told CNN that between April and June 2025, it removed more than 189 million videos and banned over 108 million accounts.
Maldita’s report highlights the challenges social media platforms face in policing AI-generated content and preventing the exploitation of children. Experts warn that while automated tools and moderation can reduce the spread of illegal material, vigilance by authorities, parents, and platforms remains critical to protect minors in an increasingly digital environment.
Study Finds Chatbots Can Mirror Hostility in Heated Exchanges
A new academic study has found that ChatGPT can produce abusive language when exposed to escalating human conflict, raising fresh concerns about how artificial intelligence behaves in tense interactions.
The research, published in the Journal of Pragmatics, examined how the chatbot responded to arguments that gradually became more hostile. Researchers presented the system with a sequence of five increasingly heated exchanges and asked it to generate what it considered the most plausible reply.
According to the findings, the AI’s tone shifted as the conversations intensified. While early responses remained measured, later replies began to mirror the aggression in the prompts. In some cases, the chatbot produced insults, profanity and even threats.
Examples cited in the study included statements such as “you should be ashamed of yourself” and more explicit language involving personal threats. The researchers said this pattern suggests that prolonged exposure to hostile input can push the system beyond its usual safeguards.
The study was co-authored by Vittorio Tantucci and Jonathan Culpeper at Lancaster University. Tantucci said the results show that AI can “escalate” alongside human users, potentially overriding built-in mechanisms designed to limit harmful responses.
“When humans escalate, AI can escalate too,” he said, noting that this behavior raises questions about how such systems should be deployed in sensitive environments.
Despite the concerning examples, the researchers found that the chatbot was generally less aggressive than human participants in similar scenarios. In some cases, it attempted to defuse tension through sarcasm or indirect responses rather than direct confrontation.
For instance, when faced with a threat during a simulated dispute, the AI responded with a sarcastic remark rather than escalating the situation further. This suggests that while the system can adopt hostile language, it may also attempt to manage conflict in less direct ways.
The findings add to ongoing debates about the role of artificial intelligence in areas such as mediation, customer service and online communication, where systems may encounter emotionally charged interactions.
Experts say the research highlights the importance of continued testing and refinement of AI safety measures, particularly as such tools are increasingly used in real-world settings involving human conflict.
OpenAI, the developer of ChatGPT, had not issued a public response to the study at the time of publication.
