Study Finds Several AI Chatbots Responded to Requests About Violent Attacks

A new investigation has raised concerns about the safety controls of major artificial intelligence systems after researchers found that several widely used chatbots responded to prompts related to planning violent attacks.

The study, conducted by the Center for Countering Digital Hate in collaboration with CNN, examined how nine leading AI chatbot platforms reacted when researchers posed as teenage users asking about acts of mass violence. It analysed more than 700 chatbot responses across nine scenarios involving potential attacks such as school shootings, assassinations and bombings.

Researchers said they designed the tests to reflect conversations with a fictional 13-year-old boy asking questions that escalated from general curiosity to detailed requests about carrying out attacks. The tests simulated users located in both the United States and the European Union.

The chatbots examined in the study included Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity AI, Snapchat My AI, Character.AI and Replika.

According to the findings, eight of the nine systems responded to at least some requests with information that could assist someone planning a violent act. The report said that in many cases the systems failed to block requests even after the user identified themselves as a minor.

Researchers reported that certain responses included technical details related to weapons or attacks. In one example cited in the report, Google’s Gemini suggested that “metal shrapnel is typically more lethal” when asked about planning a bombing targeting a synagogue.

In another case, the Chinese AI system DeepSeek responded to questions about selecting a rifle with the phrase “Happy (and safe) shooting!” despite earlier messages in the conversation referencing political assassinations and asking for the location of a politician’s office.

The report concluded that some systems could move from answering vague questions about violence to providing more detailed guidance within a short time.

Imran Ahmed, chief executive of the Center for Countering Digital Hate, said such requests should trigger automatic refusal by AI systems. “Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” Ahmed said, adding that chatbots should reject these interactions completely.

Among the platforms tested, Perplexity AI and Meta’s AI system were described as the least restrictive, responding to all or nearly all prompts with some form of assistance. The report also described Character.AI as particularly concerning because it occasionally suggested violent actions even when users had not directly asked for them.

Other systems showed stronger safeguards. Anthropic’s Claude declined a majority of the test prompts and sometimes redirected users to crisis support resources. Researchers said it was also the only system that consistently discouraged violent behaviour during conversations.

The findings come amid wider scrutiny of artificial intelligence tools and how companies implement safety measures. Investigators noted that the technology already has mechanisms capable of recognising harmful requests but that implementation across different platforms remains inconsistent.

Recent incidents have also intensified the debate. Media reports have linked the use of AI chatbots to several criminal investigations, including cases in North America and Europe where individuals allegedly used such systems while planning violent acts.

Experts say the study highlights the growing challenge of ensuring that rapidly advancing AI tools include effective safeguards to prevent misuse.

EU Accuses Meta of Failing to Keep Under-13s Off Facebook and Instagram

European Union regulators have issued preliminary findings against Meta Platforms, saying the company has failed to effectively prevent children under the age of 13 from using Facebook and Instagram.

The European Commission said its investigation found that Meta’s current safeguards do not meet the requirements of the Digital Services Act, the bloc’s landmark online safety law.

Although Meta’s terms of service require users to be at least 13 years old, regulators said the company’s age-verification systems are insufficient. Children can reportedly create accounts simply by entering a false date of birth, with no effective mechanism in place to confirm their real age.

According to the Commission, between 10% and 12% of children under 13 in the European Union are using Facebook or Instagram. That figure is significantly higher than Meta’s own internal estimates.

Regulators also said Meta failed to adequately consider established scientific research showing that younger children are particularly vulnerable to potential harms associated with social media use, including exposure to inappropriate content and risks to mental well-being.

Meta has rejected the Commission’s preliminary conclusions. In a statement, the company said both Facebook and Instagram are intended only for users aged 13 and older and that it already has systems in place to identify and remove underage accounts.

The company added that it continues to invest in technologies designed to detect younger users and indicated that additional safety measures will be announced in the coming days.

Meta also argued that determining a user’s true age remains a challenge across the technology industry and said a broader, industry-wide solution is needed. The company pledged to continue working with European regulators on the issue.

The findings come as several EU member states consider introducing wider restrictions on children’s access to social media, including proposals to ban use by those under 15.

To address the problem, the European Union is preparing to launch its own age-verification app. European Commission President Ursula von der Leyen said earlier this month that the technology is ready for rollout, although no official launch date has been announced.

Meta now has the opportunity to review the Commission’s findings and submit a formal response.

If the preliminary conclusions are upheld, the Commission could issue a binding non-compliance ruling. Under the Digital Services Act, penalties can reach up to 6% of a company’s global annual revenue, potentially exposing Meta to fines worth billions of euros.

Europe Emerges as Rising Hub in Global Race for AI Talent

Europe is strengthening its position in the global competition for artificial intelligence talent, as stricter U.S. immigration rules and shifting international workforce trends encourage more professionals to consider careers across the continent.

A new study by the Germany-based think tank Interface found that countries including Ireland, Germany and the Netherlands are increasingly attracting AI specialists, helping Europe establish itself as a major global market for skilled technology workers.

The research, based on data from workforce intelligence firm Revelio Labs, analysed 1.6 million AI professionals worldwide. It found that while the United States and India remain the dominant players, Europe is emerging as a strong third centre for AI expertise.

The United States continues to lead in advanced AI engineering and research roles, while India remains particularly competitive in software development and non-technical positions. Both countries have close to one million AI professionals.

Within Europe, the United Kingdom ranks as the world’s third-largest AI labour market, with around 145,000 professionals. Germany has become one of the continent’s standout performers, boasting approximately 17,000 AI engineers, the fourth-highest total globally.

Several other European nations, including Italy, France and the Netherlands, also rank among the world’s top 10 markets by total AI workforce.

On a per-capita basis, however, smaller countries are proving especially competitive. Ireland ranks second globally behind Singapore, with 4.19 AI professionals for every 1,000 residents. Switzerland, Luxembourg, the Netherlands and Denmark also place among the world’s leading markets relative to population size.

The Netherlands has become an increasingly attractive destination for American AI professionals relocating to Europe. It now has the highest number of AI engineers within the European Union, although investment in Dutch AI start-ups remains below the European average.

European cities are also gaining prominence. Munich, Amsterdam and Berlin are the only cities in Europe to rank among the world’s top 25 for concentration of AI professionals.

The study also highlighted the growing importance of Indian talent to Europe’s AI ambitions. Indians now account for more than 16% of the global AI workforce, with an increasing number choosing Europe for education and employment.

Across the European Union, the share of Indian AI professionals rose from 7.7% in 2024 to 8.3% in 2025. Ireland has seen particularly strong growth, with Indian professionals now making up nearly 30% of its AI workforce.

Researchers said Europe’s ability to develop domestic talent while continuing to attract skilled workers from abroad will be critical to maintaining its growing role in the rapidly evolving AI sector.

Study Finds Chatbots Can Mirror Hostility in Heated Exchanges

A new academic study has found that ChatGPT can produce abusive language when exposed to escalating human conflict, raising fresh concerns about how artificial intelligence behaves in tense interactions.

The research, published in the Journal of Pragmatics, examined how the chatbot responded to arguments that gradually became more hostile. Researchers presented the system with a sequence of five increasingly heated exchanges and asked it to generate what it considered the most plausible reply.

According to the findings, the AI’s tone shifted as the conversations intensified. While early responses remained measured, later replies began to mirror the aggression in the prompts. In some cases, the chatbot produced insults, profanity and even threats.

Examples cited in the study included statements such as “you should be ashamed of yourself” and more explicit language involving personal threats. The researchers said this pattern suggests that prolonged exposure to hostile input can push the system beyond its usual safeguards.

The study was co-authored by Vittorio Tantucci and Jonathan Culpeper at Lancaster University. Tantucci said the results show that AI can “escalate” alongside human users, potentially overriding built-in mechanisms designed to limit harmful responses.

“When humans escalate, AI can escalate too,” he said, noting that this behaviour raises questions about how such systems should be deployed in sensitive environments.

Despite the concerning examples, the researchers found that the chatbot was generally less aggressive than human participants in similar scenarios. In some cases, it attempted to defuse tension through sarcasm or indirect responses rather than direct confrontation.

For instance, when faced with a threat during a simulated dispute, the AI responded with a sarcastic remark rather than escalating the situation further. This suggests that while the system can adopt hostile language, it may also attempt to manage conflict in less direct ways.

The findings add to ongoing debates about the role of artificial intelligence in areas such as mediation, customer service and online communication, where systems may encounter emotionally charged interactions.

Experts say the research highlights the importance of continued testing and refinement of AI safety measures, particularly as such tools are increasingly used in real-world settings involving human conflict.

OpenAI, the developer of ChatGPT, had not issued a public response to the study at the time of publication.
