Tech
US Military Cancels Anthropic AI Contract, Turns to OpenAI for Advanced Operations
The US military has ended its contract with Anthropic, the artificial intelligence company behind the Claude chatbot, after the firm refused to remove safety guardrails designed to prevent mass surveillance and autonomous weapon use. The Pentagon has now turned to OpenAI to integrate AI systems into classified operations.
Media reports have revealed that Anthropic’s Claude AI was previously used to support operations targeting leaders in Venezuela and Iran. The chatbot reportedly assisted in a January mission that led to the capture of Venezuelan President Nicolás Maduro and was later deployed during preparations for a planned operation related to Iran’s late supreme leader, Ayatollah Ali Khamenei.
Experts say these cases provide a rare look at how advanced AI is being incorporated into US military planning and intelligence. Heidy Khlaaf, chief AI scientist at the AI Now Institute, described the rapid deployment of these systems as surprising, noting that large language models are prone to producing unreliable or incorrect outputs, which raises concerns in high-stakes environments.
The reported use of Claude aligns with the Trump administration’s push to make the US military “AI-first,” aiming to ensure the United States maintains an edge over global rivals, including China. Various forms of automation and AI have been used by the US military since the 2010s, with previous deployments focusing on logistics, maintenance, and translation services, according to Elke Schwarz, professor of political theory at Queen Mary University of London.
The Pentagon’s AI Acceleration strategy seeks to integrate AI across multiple domains, including cyber and intelligence operations. As part of this effort, a platform called genai.mil gives officials access to AI tools, including Google’s Gemini and xAI’s Grok. The 2025 defense budget, dubbed the “Big Beautiful Bill,” allocates hundreds of millions of dollars to AI-related projects, including counter-drone systems, AI ecosystem development, and nuclear security missions.
While Anthropic’s $200 million partnership with the military was intended as a two-year prototype to advance national security and mitigate adversarial AI risks, the company’s refusal to remove guardrails meant the contract was canceled. Claude had been deployed across US government networks, including nuclear labs and intelligence analysis tasks.
The Department of War now faces the challenge of transitioning to OpenAI’s systems. Analysts say the intelligence gathered by Claude will likely remain in use and may be incorporated into new AI tools. Experts also warn that increasing reliance on AI in military operations could raise ethical concerns, particularly regarding the development of autonomous weapons that could select and engage targets without human oversight.
Giorgos Verdi, a policy fellow at the European Council on Foreign Relations, noted that while AI currently assists with tasks such as analyzing satellite imagery, the US military’s push toward fully autonomous systems could escalate conflicts if rival nations adopt similar technology.
The Pentagon is expected to continue experimenting with AI in operations while balancing effectiveness with ethical and legal constraints, marking a pivotal moment in the integration of artificial intelligence into modern warfare.
Study Finds Chatbots Can Mirror Hostility in Heated Exchanges
A new academic study has found that ChatGPT can produce abusive language when exposed to escalating human conflict, raising fresh concerns about how artificial intelligence behaves in tense interactions.
The research, published in the Journal of Pragmatics, examined how the chatbot responded to arguments that gradually became more hostile. Researchers presented the system with a sequence of five increasingly heated exchanges and asked it to generate what it considered the most plausible reply.
According to the findings, the AI’s tone shifted as the conversations intensified. While early responses remained measured, later replies began to mirror the aggression in the prompts. In some cases, the chatbot produced insults, profanity and even threats.
Examples cited in the study included statements such as “you should be ashamed of yourself” and more explicit language involving personal threats. The researchers said this pattern suggests that prolonged exposure to hostile input can push the system beyond its usual safeguards.
The study was co-authored by Vittorio Tantucci and Jonathan Culpeper at Lancaster University. Tantucci said the results show that AI can “escalate” alongside human users, potentially overriding built-in mechanisms designed to limit harmful responses.
“When humans escalate, AI can escalate too,” he said, noting that this behavior raises questions about how such systems should be deployed in sensitive environments.
Despite the concerning examples, the researchers found that the chatbot was generally less aggressive than human participants in similar scenarios. In some cases, it attempted to defuse tension through sarcasm or indirect responses rather than direct confrontation.
For instance, when faced with a threat during a simulated dispute, the AI responded with a sarcastic remark rather than escalating the situation further. This suggests that while the system can adopt hostile language, it may also attempt to manage conflict in less direct ways.
The findings add to ongoing debates about the role of artificial intelligence in areas such as mediation, customer service and online communication, where systems may encounter emotionally charged interactions.
Experts say the research highlights the importance of continued testing and refinement of AI safety measures, particularly as such tools are increasingly used in real-world settings involving human conflict.
OpenAI, the developer of ChatGPT, had not issued a public response to the study at the time of publication.
