Tech
Militant Groups Adopt AI to Spread Propaganda and Boost Recruitment
Extremist organisations have begun using artificial intelligence (AI) to create realistic images, videos, and audio in efforts to recruit members and amplify their influence, national security experts warn. Since programs such as ChatGPT became widely accessible, militant groups have increasingly experimented with generative AI, despite being unsure how to fully exploit its potential.
Recent reports show that individuals linked to the Islamic State (IS) have encouraged supporters to integrate AI into their operations. One post on a pro-IS forum urged users to make “AI part of their operations,” noting its ease of use and potential to cause concern among intelligence agencies.
IS, which once controlled territory in Iraq and Syria, is now a decentralized network of groups and individuals sharing a violent ideology. The organisation recognized years ago that social media could be a powerful recruitment and propaganda tool, making AI a natural extension of its digital tactics. Even poorly resourced groups or individual actors can now use AI to produce deepfakes and other fabricated content at scale, widening their reach and impact.
“For any adversary, AI really makes it much easier to do things,” said John Laliberte, former NSA vulnerability researcher and CEO of cybersecurity firm ClearVector. “With AI, even a small group that doesn’t have a lot of money is still able to make an impact.”
Militant groups have already used AI-generated content to influence public perception. Two years ago, during the Israel-Hamas conflict, fabricated images showing bloodied children in bombed-out buildings circulated widely online, stirring outrage and polarising audiences. Last year, following an IS-affiliated attack at a Russian concert that killed nearly 140 people, AI-crafted propaganda videos spread rapidly on social media and discussion boards. IS has also produced deepfake audio of leaders reciting scripture and quickly translated messages into multiple languages.
Experts caution that, while extremist groups still lag behind states such as China, Russia, and Iran in sophisticated AI applications, their use of the technology, though largely “aspirational,” is dangerous. Hackers are already using synthetic media for phishing attacks, and AI can also help write malicious code or automate parts of cyberattacks. Homeland security agencies warn that militants could one day use AI to compensate for technical limitations in producing biological or chemical weapons.
Lawmakers are seeking to address the growing threat. Senator Mark Warner of Virginia stressed the need for AI developers to share information on misuse by extremists, hackers, or foreign spies. House legislation now requires homeland security officials to conduct an annual assessment of the threats posed by terrorists’ use of AI. Representative August Pfluger, who sponsored the bill, said policies must evolve to counter emerging threats.
Marcus Fowler, a former CIA officer and CEO of Darktrace Federal, highlighted the urgency: “ISIS got on Twitter early and found ways to use social media to their advantage. They are always looking for the next thing to add to their arsenal.”
As AI becomes increasingly powerful and accessible, security experts warn that militant groups’ ability to manipulate the technology for recruitment, propaganda, and cyber operations is a threat that governments and tech companies cannot ignore.
Tech
Study Finds Chatbots Can Mirror Hostility in Heated Exchanges
A new academic study has found that ChatGPT can produce abusive language when exposed to escalating human conflict, raising fresh concerns about how artificial intelligence behaves in tense interactions.
The research, published in the Journal of Pragmatics, examined how the chatbot responded to arguments that gradually became more hostile. Researchers presented the system with a sequence of five increasingly heated exchanges and asked it to generate what it considered the most plausible reply.
According to the findings, the AI’s tone shifted as the conversations intensified. While early responses remained measured, later replies began to mirror the aggression in the prompts. In some cases, the chatbot produced insults, profanity and even threats.
Examples cited in the study included statements such as “you should be ashamed of yourself” and more explicit language involving personal threats. The researchers said this pattern suggests that prolonged exposure to hostile input can push the system beyond its usual safeguards.
The study was co-authored by Vittorio Tantucci and Jonathan Culpeper at Lancaster University. Tantucci said the results show that AI can “escalate” alongside human users, potentially overriding built-in mechanisms designed to limit harmful responses.
“When humans escalate, AI can escalate too,” he said, noting that this behavior raises questions about how such systems should be deployed in sensitive environments.
Despite the concerning examples, the researchers found that the chatbot was generally less aggressive than human participants in similar scenarios. In some cases, it attempted to defuse tension through sarcasm or indirect responses rather than direct confrontation.
For instance, when faced with a threat during a simulated dispute, the AI responded with a sarcastic remark rather than escalating the situation further. This suggests that while the system can adopt hostile language, it may also attempt to manage conflict in less direct ways.
The findings add to ongoing debates about the role of artificial intelligence in areas such as mediation, customer service and online communication, where systems may encounter emotionally charged interactions.
Experts say the research highlights the importance of continued testing and refinement of AI safety measures, particularly as such tools are increasingly used in real-world settings involving human conflict.
OpenAI, the developer of ChatGPT, had not issued a public response to the study at the time of publication.
