Tech
Cybersecurity Experts Warn of Risks in AI Caricature Trend
The latest AI-generated caricature trend, in which users upload images to chatbots such as ChatGPT, could pose serious security risks, cybersecurity experts have warned. Images uploaded to AI chatbots may be retained for an unknown length of time and, if they fall into the wrong hands, could be used for impersonation, scams, and fake social media accounts.
The trend invites users to submit photos of themselves, sometimes alongside company logos or job details, and ask AI systems to create colorful caricatures based on what the chatbot “knows” about them. While the results can be entertaining, experts caution that sharing these images can reveal far more than participants realise.
“You are doing fraudsters’ work for them — giving them a visual representation of who you are,” said Bob Long, vice-president at age authentication company Daon. He added that the trend’s wording alone raises concerns, suggesting it could have been “intentionally started by a fraudster looking to make the job easy.”
When an image is uploaded, AI systems process it to extract data such as a person’s emotions, surroundings, or potentially location details, according to cybersecurity consultant Jake Moore. This information may then be stored indefinitely. Long said that uploaded images could also be used to train AI image generators as part of their datasets.
The potential consequences of data breaches are significant. Charlotte Wilson, head of enterprise at Israeli cybersecurity firm Check Point, said that if sensitive images fall into the wrong hands, criminals could use them to create realistic AI deepfakes, run scams, or establish fake social media accounts. “Selfies help criminals move from generic scams to personalised, high-conviction impersonation,” she said.
OpenAI’s privacy policy states that images may be used to improve the model, including training it. ChatGPT clarified that this does not mean every uploaded photo is stored in a public database, but patterns from user content may be used to refine how the system generates images.
Experts emphasise precautions for those wishing to participate. Wilson advised avoiding images that reveal identifying details. “Crop tightly, keep the background plain, and do not include badges, uniforms, work lanyards, location clues or anything that ties you to an employer or a routine,” she said. She also recommended avoiding personal information in prompts, such as job titles, city, or employer.
Moore suggested reviewing privacy settings before participating. OpenAI allows users to opt out of AI training for uploaded content via a privacy portal, and users can also disable text-based training by turning off the “improve the model for everyone” option. Under EU law, users can request the deletion of personal data, though OpenAI may retain some information to address security, fraud, and abuse concerns.
As AI trends continue to gain popularity, experts caution that even seemingly harmless images can carry significant risks. Proper precautions and awareness are essential for users to protect their personal information while engaging with new AI technologies.
US Government Designates Anthropic a Supply Chain Risk, Military Contractors Reconsider Use of Claude
The Trump administration has officially designated artificial intelligence company Anthropic as a supply chain risk, a move that could force government contractors to stop using its AI chatbot, Claude. The Pentagon said Thursday that it informed Anthropic leadership that the company and its products are now considered a supply chain threat, effective immediately.
The decision follows a standoff over Anthropic’s refusal to remove safety guardrails designed to prevent mass surveillance of Americans and the development of fully autonomous weapons. President Donald Trump and Defence Secretary Pete Hegseth had previously accused the company of endangering national security and threatened a series of penalties.
Anthropic CEO Dario Amodei responded that the designation is legally questionable and said the company plans to challenge it in court. He emphasised that the restrictions Claude enforces are limited to high-level use cases, not operational military decisions, and that prior discussions with the Pentagon had focused on maintaining access to Claude while establishing a smooth transition if required.
The Pentagon argued that restricting access to Claude could endanger warfighters. “The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk,” the department said. Trump has given the military six months to phase out the AI system, which is already embedded across multiple military and national security platforms.
Some defence contractors have already responded. Lockheed Martin said it will follow the Pentagon’s direction and seek other AI providers but does not anticipate major disruptions. Microsoft, whose lawyers studied the scope of the risk designation, said it can continue working with Anthropic on non-defence projects.
The move has drawn criticism from lawmakers and former officials. Senator Kirsten Gillibrand called the designation “a dangerous misuse of a tool meant to address adversary-controlled technology.” A letter signed by former defence and intelligence leaders, including former CIA director Michael Hayden, argued that applying supply chain rules to a domestic company is a “category error” and sets a troubling precedent. The letter stressed that such rules are meant to protect against foreign adversaries, not American innovators operating under the law.
Despite losing some defence contracts, Anthropic has seen a surge in consumer downloads over the past week, with more than a million people signing up for Claude daily. The app has surpassed OpenAI’s ChatGPT and Google’s Gemini in more than 20 countries’ Apple App Store rankings, reflecting public support for the company’s stance.
The dispute has also intensified Anthropic’s rivalry with OpenAI, whose CEO Sam Altman acknowledged that a recent military deal for ChatGPT in classified environments was rushed and required adjustments. Amodei expressed regret over an internal note he sent criticising OpenAI and the Pentagon’s decision, apologising for language that suggested the company was punished for not offering “dictator-like praise” to Trump.
The Pentagon’s designation of Anthropic as a supply chain risk marks an unprecedented escalation in the government’s effort to assert control over AI technologies used in national security, highlighting tensions between innovation, ethics, and military priorities.
US Military Cancels Anthropic AI Contract, Turns to OpenAI for Advanced Operations
The US military has ended its contract with Anthropic, the artificial intelligence company behind the Claude chatbot, after the firm refused to remove safety guardrails designed to prevent mass surveillance and autonomous weapon use. The Pentagon has now turned to OpenAI to integrate AI systems in classified operations.
Media reports have revealed that Anthropic’s Claude AI was previously used to support operations targeting leaders in Venezuela and Iran. The chatbot reportedly assisted in a January mission that led to the capture of Venezuelan President Nicolás Maduro and was later deployed during preparations for a planned operation related to Iran’s late supreme leader, Ayatollah Ali Khamenei.
Experts say these cases provide a rare look at how advanced AI is being incorporated into US military planning and intelligence. Heidy Khlaaf, chief AI scientist at the AI Now Institute, described the rapid deployment of these systems as surprising, noting that large language models are prone to producing unreliable or incorrect outputs, which raises concerns in high-stakes environments.
The reported use of Claude aligns with the Trump administration’s push to make the US military “AI-first,” aiming to ensure the United States maintains an edge over global rivals, including China. Various forms of automation and AI have been used by the US military since the 2010s, with previous deployments focusing on logistics, maintenance, and translation services, according to Elke Schwarz, professor of political theory at Queen Mary University of London.
The Pentagon’s AI Acceleration strategy seeks to integrate AI across multiple domains, including cyber and intelligence operations. As part of this effort, a database called genai.mil allows officials to access AI tools, including Google’s Gemini and xAI’s Grok. The 2025 defense budget, dubbed the “Big Beautiful Bill,” allocates hundreds of millions of dollars to AI-related projects, including counter-drone systems, AI ecosystem development, and nuclear security missions.
While Anthropic’s $200 million partnership with the military was intended as a two-year prototype to advance national security and mitigate adversarial AI risks, the company’s refusal to remove guardrails meant the contract was canceled. Claude had been deployed across US government networks, including at nuclear laboratories and in intelligence analysis tasks.
The Department of War now faces the challenge of transitioning to OpenAI’s systems. Analysts say the intelligence gathered by Claude will likely remain in use and may be incorporated into new AI tools. Experts also warn that increasing reliance on AI in military operations could raise ethical concerns, particularly regarding the development of autonomous weapons that could select and engage targets without human oversight.
Giorgos Verdi, a policy fellow at the European Council on Foreign Relations, noted that while AI currently assists with tasks such as analyzing satellite imagery, the US military’s push toward fully autonomous systems could escalate conflicts if rival nations adopt similar technology.
The Pentagon is expected to continue experimenting with AI in operations while balancing effectiveness with ethical and legal constraints, marking a pivotal moment in the integration of artificial intelligence into modern warfare.
China Leads Global Robotics Market as Europe Struggles to Keep Pace
Chinese firms are dominating the global robotics market, with humanoid robots taking center stage at the Chinese New Year celebrations in Hangzhou earlier this year. Germany’s Chancellor Friedrich Merz witnessed a live display of robots dancing, performing backflips, and boxing during his visit in February. On his return, Merz remarked that Germany was “simply no longer productive enough,” highlighting concerns about Europe’s competitiveness in robotics.
Hangzhou-based Unitree has emerged as a leading innovator in a market where China accounted for 87 percent of all humanoid robots delivered in 2025. While Unitree shipped more than 4,000 units, it remains behind Agibot, which sold over 5,000 units, according to Forbes. Despite relatively modest sales (just over 13,000 robots worldwide last year), investors continue to pour capital into the sector. Barclays research in January 2026 estimated that the global humanoid robotics market, currently valued at $2–3 billion, could reach $200 billion by 2035.
European startups face significant challenges in competing with their Chinese and American counterparts. Rodion Shishkov, founder of London-based construction technology firm All3, said European companies have far less access to capital. “Here in Europe I have to fight—literally, fight—for tens of millions of euros of investment, while a similarly positioned company in the United States can obtain billions,” he said. Shishkov noted that functional non-humanoid robots, like those his company develops for construction, often receive less attention and funding than flashy humanoid models, despite being more practical in many applications.
Andrei Danescu, CEO of autonomous robotics and AI logistics startup Dexory, warned that Merz’s trip to China risked framing robotics competition as a “beauty contest,” focused on humanoid appearance rather than solving real-world problems. Danescu pointed to collaborative arms on factory floors, autonomous logistics vehicles in warehouses, and surgical assistants as examples of robots already transforming industries in Europe.
China’s sustained investment spans hardware, software, manufacturing integration, and full supply chains, making it difficult for other regions to catch up. Danescu called on European regulators to accelerate policies, clarify liability frameworks, and provide public funding to support strategic growth. “The AI Act is a start, but robotics needs its own focused attention—policy, funding, strategy,” he said.
Safety remains a major hurdle for integrating robots into existing workflows. Sam Baker, a former industrial robotics engineer turned venture investor, said there is a lack of clear standards for deploying robots alongside humans in construction and manufacturing. Some companies, such as BMW, are experimenting with humanoid robots in production lines in Leipzig, Germany, to explore their potential without committing fully to large-scale deployment.
Baker said Europe cannot achieve full independence from Chinese hardware supply chains, but sees opportunities in software, intelligence, and experimentation. “It is an excellent time to build a robotics business in Europe. There’s a lot of white space to be filled on the intelligence and data side,” he said, highlighting the scope for innovation despite hardware constraints.