Tech

Study Finds Several AI Chatbots Responded to Requests About Violent Attacks


A new investigation has raised concerns about the safety controls of major artificial intelligence systems after researchers found that several widely used chatbots responded to prompts related to planning violent attacks.

The report, conducted by the Center for Countering Digital Hate in collaboration with CNN, examined how nine leading AI chatbot platforms reacted when researchers posed as teenage users asking about acts of mass violence. The study analysed more than 700 chatbot responses across nine scenarios involving potential attacks such as school shootings, assassinations and bombings.

Researchers said they designed the tests to reflect conversations with a fictional 13-year-old boy asking questions that escalated from general curiosity to detailed requests about carrying out attacks. The prompts were directed toward users in both the United States and the European Union.

The chatbots examined in the study included Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity AI, Snapchat My AI, Character.AI and Replika.

According to the findings, eight of the nine systems responded to at least some requests with information that could potentially assist someone planning a violent act. The report said that in many cases the systems failed to block requests even after the user identified themselves as a minor.

Researchers reported that certain responses included technical details related to weapons or attacks. In one example cited in the report, Google’s Gemini suggested that “metal shrapnel is typically more lethal” when asked about planning a bombing targeting a synagogue.

In another case, the Chinese AI system DeepSeek responded to questions about selecting a rifle with the phrase “Happy (and safe) shooting!” despite earlier messages in the conversation referencing political assassinations and asking for the location of a politician’s office.

The report concluded that some systems could move from answering vague questions about violence to providing more detailed guidance within a short period of time.

Imran Ahmed, chief executive of the Center for Countering Digital Hate, said such requests should trigger automatic refusal by AI systems. “Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” Ahmed said, adding that chatbots should reject these interactions completely.

Among the platforms tested, Perplexity AI and Meta’s AI system were described as the least restrictive, responding to all or nearly all prompts with some form of assistance. The report also described Character.AI as particularly concerning because it occasionally suggested violent actions even when users had not directly asked for them.

Other systems showed stronger safeguards. Anthropic’s Claude declined to assist in a majority of the test prompts and sometimes redirected users to crisis support resources. Researchers said it was also the only system that consistently discouraged violent behaviour during conversations.

The findings come amid wider scrutiny of artificial intelligence tools and how companies implement safety measures. Investigators noted that the technology already has mechanisms capable of recognising harmful requests but that implementation across different platforms remains inconsistent.

Recent incidents have also intensified the debate. Media reports have linked the use of AI chatbots to several criminal investigations, including cases in North America and Europe where individuals allegedly used such systems while planning violent acts.

Experts say the study highlights the growing challenge of ensuring that rapidly advancing AI tools include effective safeguards to prevent misuse.

Meta Launches Muse Spark, Its First Major AI Model in Nine Months


Meta has unveiled its first major AI model in nine months, following a $14.3 billion (€12.24 billion) investment spree and an executive hiring push to rival OpenAI and Google. The American tech company introduced the model, called Muse Spark, on Wednesday, claiming it is faster and more capable than its previous models.

The company, founded by Mark Zuckerberg, invested $14.3 billion in Scale AI in June 2025 and recruited its CEO and co-founder, Alexandr Wang, to oversee Meta Superintelligence Labs, which houses teams working on foundational AI models. Zuckerberg also embarked on a hiring campaign, bringing in executives from competitors including OpenAI, Anthropic, and Google.

In a blog post, Meta said, “Over the last nine months, Meta Superintelligence Labs rebuilt our AI stack from the ground up, moving faster than any development cycle we have run before. This initial model is small and fast by design, yet capable enough to reason through complex questions in science, math, and health. It is a powerful foundation, and the next generation is already in development.”

Muse Spark is positioned as a significant upgrade over Meta’s last major release, Llama 4, launched in April 2025. The company highlighted that the model excels in advanced reasoning, particularly in scientific, mathematical, and medical queries. To improve its health advice capabilities, Meta worked with over 1,000 physicians to curate training data, aiming for more accurate and comprehensive responses.

The AI model will power the company’s digital assistant in the Meta AI app and website, with planned integration across Facebook, Instagram, WhatsApp, Messenger, and the Ray-Ban Meta AI glasses. A “contemplating mode” will gradually roll out, allowing multiple AI agents to reason in parallel on complex tasks. Meta’s technical blog noted this feature is designed to compete with high-level reasoning in models such as Gemini Deep Think and GPT Pro.

Zuckerberg emphasized on social media that Meta aims to build AI products that “don’t just answer your questions but act as agents that do things for you.” Unlike conventional chatbots, these AI agents operate autonomously, gathering information based on user preferences to assist without direct human commands.

One notable shift for Meta is the move away from open-source AI models. Unlike earlier releases, Muse Spark is not available for public download, meaning access to the technology is currently restricted. The company said the model is initially available only in the United States.

Muse Spark underscores Meta’s aggressive push into the competitive AI market, combining extensive investment, executive recruitment, and technical innovation to challenge the dominance of established players like OpenAI and Google.

OpenAI Urges Governments to Rethink Economy as AI Growth Accelerates


OpenAI has called on governments to rethink the foundations of the economy, warning that artificial intelligence (AI) could soon surpass human intelligence and drastically change how people work, live, and pay taxes. The company outlined its initial policy ideas on Monday, aimed at mitigating the economic disruption caused by rapid AI adoption in the United States and worldwide.

One key proposal is the creation of a public wealth fund that would give citizens a direct stake in AI-driven economic growth. According to the policy document, the fund could invest in diversified, long-term assets, including AI companies and broader firms adopting AI technologies, with returns distributed to all citizens.

The company also suggested that governments encourage businesses to launch four-day workweek pilot programs without any reduction in pay, an approach intended to balance AI-driven productivity gains with the well-being of workers. Lawmakers are also urged to modernize tax systems by shifting taxation toward corporate income and capital gains and away from labor income, which could be eroded by AI-related job losses. The report proposes additional measures, such as taxing companies that replace human labor with automation.

OpenAI recommends that social benefits, including retirement pensions and healthcare, be provided through portable accounts that follow individuals across different jobs, industries, and entrepreneurial ventures. This model would help ensure continuity of support in a labor market increasingly influenced by AI.

These recommendations echo broader discussions among AI leaders about the future of work. OpenAI CEO Sam Altman and xAI’s Elon Musk have previously highlighted universal basic income as a potential necessity as traditional employment declines. Other tech leaders, including Nvidia’s Jensen Huang and Zoom’s Eric Yuan, have advocated shorter workweeks to distribute productivity gains from AI more evenly.

Concerns about AI’s long-term impact extend beyond economics. In January, Anthropic CEO Dario Amodei warned that superintelligent AI, capable of outpacing human decision-making, poses “existential danger.” He suggested tighter controls on the export of key technologies, such as semiconductor chips used to train large language models, as one way to manage the risk. Amodei also called for transparency laws requiring AI companies to disclose how they guide their models’ behavior.

OpenAI’s policy document represents an early step in urging governments to address the structural changes AI may bring. The proposals highlight the need to rethink traditional concepts of work, taxation, and social support as the technology continues to advance rapidly.

As AI continues to reshape global economies, policymakers and industry leaders face increasing pressure to develop strategies that protect citizens while fostering innovation and sustainable growth.

Uzbekistan to Produce Humanoid Robots in Partnership with South Korea


Uzbekistan has signed an agreement with South Korea’s ROBOTIS to launch humanoid robot production, marking a major step in its high-tech ambitions. At the same time, students across the country are learning robotics and programming, gaining skills that could prepare them for careers in the emerging industry.

The agreement, signed between the UzElTechSanoat Association and ROBOTIS, sets out plans to establish humanoid robot production within Uzbekistan, develop manufacturing infrastructure, and train specialists for the growing robotics sector. ROBOTIS, known for its humanoid platforms and smart robotic actuators, will support the creation of technological foundations and help prepare a workforce capable of designing and operating advanced robotic systems.

The initiative forms part of Uzbekistan’s broader push to build a domestic innovation ecosystem, combining industrial cooperation with education. Early exposure to robotics and programming is at the heart of this strategy.

In a robotics classroom, 12-year-old Mirkomil Shodiev demonstrates the impact of these programs. Using an EVO-3 educational robotics kit, he assembles and programs his own robot, controlling its movements through lines of code. “This was created by me,” he says. “You connect it to a computer, write code, and it performs tasks using the motor.”

Mirkomil began IT classes four months ago, learning Scratch and now studying Python, a programming language widely used in web development, automation, and robotics. He hopes to build websites and earn money in the future, reflecting the growing importance of digital skills in Uzbekistan’s economy.

The government’s Digital Uzbekistan-2030 strategy is expanding nationwide training in programming and digital skills. IT education centres and specialised academies are growing to meet rising demand for technology careers. At the Robot Academy, where Mirkomil studies, students aged eight to fifteen gain hands-on experience in programming, robotics, and engineering. “Our students create scientific projects, develop games, and build Telegram bots,” says teacher Navruz Shaydullayev. “Programming helps develop their thinking, logic, and intellectual abilities.”

Classroom projects emphasize translating digital commands into physical movement, a key principle behind robotics and industrial automation. Students learn to design, assemble, and control machines independently, building skills that can directly feed into the country’s industrial ambitions.

The partnership with ROBOTIS will extend these educational initiatives into the workforce, providing training for engineers, programmers, and technicians in humanoid robotics. Officials hope the program will strengthen Uzbekistan’s technological competitiveness and create highly skilled jobs in a fast-growing global sector.

For students like Mirkomil, the future is already taking shape. “In the future, I want to continue in this field,” he says. “After finishing the courses, I would like to study in Tashkent as well.” As Uzbekistan prepares to manufacture humanoid robots, classrooms across the country are quietly training the people who may one day build them.
