Tech
Militant Groups Adopt AI to Spread Propaganda and Boost Recruitment
Extremist organisations have begun using artificial intelligence (AI) to create realistic images, videos, and audio in efforts to recruit members and amplify their influence, national security experts warn. Since programs such as ChatGPT became widely accessible, militant groups have increasingly experimented with generative AI, despite being unsure how to fully exploit its potential.
Recent reports show that individuals linked to the Islamic State (IS) have encouraged supporters to integrate AI into their operations. One post on a pro-IS forum urged users to make “AI part of their operations,” noting its ease of use and potential to cause concern among intelligence agencies.
IS, which once controlled territory in Iraq and Syria, is now a decentralized network of groups and individuals sharing a violent ideology. The organisation recognized years ago that social media could be a powerful recruitment and propaganda tool, making AI a natural extension of its digital tactics. Even poorly resourced groups or individual actors can now use AI to produce deepfakes and other fabricated content at scale, widening their reach and impact.
“For any adversary, AI really makes it much easier to do things,” said John Laliberte, former NSA vulnerability researcher and CEO of cybersecurity firm ClearVector. “With AI, even a small group that doesn’t have a lot of money is still able to make an impact.”
Militant groups have already used AI-generated content to influence public perception. Two years ago, during the Israel-Hamas conflict, fabricated images showing bloodied children in bombed-out buildings circulated widely online, stirring outrage and polarising audiences. Last year, following an IS-affiliated attack at a Russian concert that killed nearly 140 people, AI-crafted propaganda videos spread rapidly on social media and discussion boards. IS has also produced deepfake audio of leaders reciting scripture and quickly translated messages into multiple languages.
Experts caution that while extremist groups still lag behind states such as China, Russia, and Iran in sophisticated AI applications, their use of the technology, though largely "aspirational," remains dangerous. Hackers are already using synthetic media for phishing attacks, and AI can also help write malicious code or automate parts of cyberattacks. Homeland security agencies warn that militants could one day use AI to compensate for technical limitations in producing biological or chemical weapons.
Lawmakers are seeking to address the growing threat. Senator Mark Warner of Virginia stressed the need for AI developers to share information on misuse by extremists, hackers, or foreign spies. House legislation now requires homeland security officials to conduct an annual assessment of the risks posed by terrorists' use of AI. Representative August Pfluger, who sponsored the bill, said policies must evolve to counter emerging threats.
Marcus Fowler, former CIA agent and CEO of Darktrace Federal, highlighted the urgency: “ISIS got on Twitter early and found ways to use social media to their advantage. They are always looking for the next thing to add to their arsenal.”
As AI becomes increasingly powerful and accessible, security experts warn that militant groups’ ability to manipulate the technology for recruitment, propaganda, and cyber operations is a threat that governments and tech companies cannot ignore.
Report reveals AI-generated videos of children circulating on TikTok, linked to illegal content on Telegram
A recent investigation has found that AI-generated videos showing young girls in sexualised clothing or suggestive poses have gained widespread attention on TikTok, raising serious concerns about child exploitation online. The Spanish fact-checking organisation Maldita analysed over 5,200 videos across more than 20 accounts, which collectively have more than 550,000 followers and nearly six million likes. Many videos featured girls in bikinis, school uniforms, or tight clothing.
Maldita’s analysis also revealed that comments on these videos contained links to external platforms, including Telegram communities that sell child pornography. The organisation reported 12 such groups to Spanish authorities. The TikTok accounts involved were generating revenue through the platform’s subscription model, which pays creators monthly fees for access to their content. TikTok receives about half of the profits under this arrangement.
The report comes amid global efforts to protect minors online. Countries including Australia, Denmark, and the European Union are introducing or considering restrictions for users under 16, with the goal of curbing exposure to harmful content. TikTok’s own policies require creators to label AI-generated content and allow the removal of content considered harmful to individuals. Despite this, Maldita found that most of the videos it examined did not include any AI identifiers or watermarks. Some content, however, displayed the platform’s “TikTok AI Alive” watermark, which is automatically applied when still images are converted into videos.
In response to the findings, both Telegram and TikTok emphasised their commitment to preventing child sexual abuse material. Telegram stated that it scans all media on its public platform against previously removed content to prevent its spread. In 2025 alone, the platform removed over 909,000 groups and channels containing child sexual abuse material.
TikTok said 99 percent of content harmful to minors is removed automatically, and that 97 percent of offending AI-generated content is taken down proactively. The platform said it immediately suspends or bans accounts that share sexually explicit content involving children and reports them to the United States' National Center for Missing and Exploited Children (NCMEC). TikTok also told CNN that between April and June 2025, it removed more than 189 million videos and banned over 108 million accounts.
Maldita’s report highlights the challenges social media platforms face in policing AI-generated content and preventing the exploitation of children. Experts warn that while automated tools and moderation can reduce the spread of illegal material, vigilance by authorities, parents, and platforms remains critical to protect minors in an increasingly digital environment.
Cambridge Index Reveals Global Black Market for Fake Social Media Verifications
A new index developed by the University of Cambridge has revealed the scale and affordability of the underground market for fake social media account verifications, raising fresh concerns about online manipulation and digital security. According to researchers, fake verification badges can be purchased for as little as eight cents, enabling the rapid creation of networks that imitate authentic users across major online platforms.
The Cambridge Online Trust and Safety Index (COTSI), launched on Thursday, is described as the first global tool capable of tracking real-time prices for fraudulent account verification. The index monitors more than 500 platforms, including TikTok, Instagram, Amazon, Spotify, and Uber. By analysing data from sellers operating across the dark web and black-market channels, the project highlights how accessible and inexpensive these services have become.
Researchers say the low cost of creating fake accounts is contributing to the rise of “bot armies” — large groups of automated or semi-automated profiles designed to mimic genuine human activity. These networks can distort online conversations, amplify misleading content, and promote scams or commercial products. They can also be deployed to influence political messaging, creating an illusion of public support or opposition during major events such as elections or policy debates.
The team behind the index said the findings come at a sensitive time for governments and regulators working to contain misinformation. Many popular platforms have reduced investment in content monitoring during the past two years, while others have introduced programmes that reward users for generating high volumes of engagement. Researchers warn that such incentives may encourage the use of artificially inflated interactions, making fake accounts even more valuable to those seeking influence.
According to Cambridge analysts, the market for fraudulent verification has become highly sophisticated. Sellers offer tiered packages, guaranteeing features such as blue-badge symbols, verified rankings or the appearance of longstanding account history. Prices vary by platform and country, but the index shows that even the most complex packages remain within easy reach for groups attempting to manipulate public debate or carry out coordinated campaigns.
The launch of COTSI marks the first attempt to document these prices on a global scale. By presenting live data on the cost of creating fake identities, researchers hope to give policymakers, technology companies and security agencies a clearer picture of how digital manipulation is evolving. The study’s authors stress that tracking these markets is essential for understanding the risks posed by unauthenticated accounts, particularly during periods of political tension.
The university said the index will be updated regularly and will remain publicly accessible as part of its efforts to strengthen digital transparency worldwide.
Experts Question Impact of Australia’s New Social Media Ban for Children Under 16
Australia has introduced sweeping restrictions that prevent children under 16 from creating or maintaining accounts on major social media platforms, but experts warn the measures may not significantly change young people’s online behaviour. The restrictions, which took effect on December 10, apply to platforms including Facebook, Instagram, TikTok, Snapchat, YouTube, Twitch, Reddit and X.
Under the new rules, children cannot open accounts, yet they can still access most platforms without logging in—raising questions about how effective the regulations will be in shaping online habits. The eSafety Commissioner says the reforms are intended to shield children from online pressures, addictive design features and content that may harm their health and wellbeing.
Social media companies are required to block underage users through age-assurance tools that rely on facial-age estimation, ID uploads or parental consent. Ahead of the rollout, authorities tested 60 verification systems across 28,500 facial recognition assessments. The results showed that while many tools could distinguish children from adults, accuracy declined among users aged 16 and 17, girls and non-Caucasian users, where estimates could be off by two years or more. Experts say the limitations mean many teenagers may still find ways around the rules.
“How do they know who is 14 or 15 when the kids have all signed up as being 75?” asked Sonia Livingstone, a social psychology professor at the London School of Economics. She warned that misclassifications will be common as platforms attempt to enforce the regulations.
Meta acknowledged the challenge, saying complete accuracy is unlikely without requiring every user to present government ID—something the company argues would raise privacy and security concerns. Users over 16 who lose access by mistake are allowed to appeal.
Several platforms have criticised the ban, arguing that it removes teenagers from safer, controlled environments. Meta and Google representatives told Australian lawmakers that logged-in teenage accounts already come with protections that limit contact from unknown users, filter sensitive subjects and disable personalised advertising. Experts say these protections are not always effective, citing studies where new YouTube and TikTok accounts quickly received misogynistic or self-harm-related content.
Analysts expect many teenagers to shift to smaller or lesser-regulated platforms. Apps such as Lemon8, Coverstar and Tango have surged into Australia’s top downloads since the start of December. Messaging apps like WhatsApp, Telegram and Signal—exempt from the ban—have also seen a spike in downloads. Livingstone said teenagers will simply “find alternative spaces,” noting that previous bans in other countries pushed young users to new platforms within days.
Researchers caution that gaming platforms such as Discord and Roblox, also outside the scope of the ban, may become new gathering points for young Australians. Studies will be conducted to assess the long-term impact on mental health and whether the restrictions support or complicate parents’ efforts to regulate screen time.
Experts say it may take several years to determine whether the ban delivers meaningful improvements to children’s wellbeing.