Connect with us

Tech

Cybersecurity Experts Warn of Risks in AI Caricature Trend

Published

on

The latest AI-generated caricature trend, in which users upload images to chatbots like ChatGPT, could pose serious security risks, cybersecurity experts have warned. Images uploaded to AI chatbots could be retained for an unknown amount of time and, if in the wrong hands, could lead to impersonation, scams, and fake social media accounts.

The trend invites users to submit photos of themselves, sometimes alongside company logos or job details, and ask AI systems to create colorful caricatures based on what the chatbot “knows” about them. While the results can be entertaining, experts caution that sharing these images can reveal far more than participants realise.

“You are doing fraudsters’ work for them — giving them a visual representation of who you are,” said Bob Long, vice-president at age authentication company Daon. He added that the trend’s wording alone raises concerns, suggesting it could have been “intentionally started by a fraudster looking to make the job easy.”

When an image is uploaded, AI systems process it to extract data such as a person’s emotions, surroundings, or potentially location details, according to cybersecurity consultant Jake Moore. This information may then be stored indefinitely. Long said that uploaded images could also be used to train AI image generators as part of their datasets.

The potential consequences of data breaches are significant. Charlotte Wilson, head of enterprise at Israeli cybersecurity firm Check Point, said that if sensitive images fall into the wrong hands, criminals could use them to create realistic AI deepfakes, run scams, or establish fake social media accounts. “Selfies help criminals move from generic scams to personalised, high-conviction impersonation,” she said.

See also  AI Boom Exposes Global Talent Shortage as Investment Soars and Safety Concerns Mount

OpenAI’s privacy policy states that images may be used to improve the model, including training it. ChatGPT clarified that this does not mean every uploaded photo is stored in a public database, but patterns from user content may be used to refine how the system generates images.

Experts emphasise precautions for those wishing to participate. Wilson advised avoiding images that reveal identifying details. “Crop tightly, keep the background plain, and do not include badges, uniforms, work lanyards, location clues or anything that ties you to an employer or a routine,” she said. She also recommended avoiding personal information in prompts, such as job titles, city, or employer.

Moore suggested reviewing privacy settings before participating. OpenAI allows users to opt out of AI training for uploaded content via a privacy portal, and users can also disable text-based training by turning off the “improve the model for everyone” option. Under EU law, users can request the deletion of personal data, though OpenAI may retain some information to address security, fraud, and abuse concerns.

As AI trends continue to gain popularity, experts caution that even seemingly harmless images can carry significant risks. Proper precautions and awareness are essential for users to protect their personal information while engaging with new AI technologies.

Tech

European Nations Accelerate Military AI Integration Amid Rising Security Demands

Published

on

European countries are rapidly expanding the use of artificial intelligence in military operations, shifting from limited experimentation to integrating advanced AI systems into core defence strategies as governments respond to growing geopolitical tensions and evolving battlefield demands.

The latest development came this week when Germany and Ukraine launched the “Brave Germany” programme, a joint initiative expected to include around 5,000 AI-enabled medium-range strike drones. The agreement highlights Europe’s increasing focus on combining artificial intelligence with defence technology as nations seek faster decision-making, improved battlefield awareness and stronger deterrence capabilities.

Defence analysts say several European states, particularly Germany, France, the United Kingdom and Ukraine, are now leading efforts to incorporate AI into military planning, surveillance and weapons systems.

Artificial intelligence has already been used by European armed forces for more than a decade in areas such as logistics, maintenance and personnel management. Researchers say progress accelerated around 2015 as military planners recognised the growing potential of AI technologies.

According to experts at the Stockholm International Peace Research Institute, current investment is largely centred on two areas: semi-autonomous weapons systems and AI-assisted decision support systems. These technologies are designed to improve operational planning, battlefield management and tactical analysis while still leaving final decisions in human hands.

Germany has emerged as one of the most active countries in the sector. In recent years, Berlin signed agreements with Munich-based defence technology company Helsing to develop AI systems for the Future Combat Air System, Europe’s next-generation fighter jet programme. Germany has also partnered with Swedish defence firm Saab to integrate AI into Eurofighter electronic warfare systems.

See also  Cambridge Index Reveals Global Black Market for Fake Social Media Verifications

Another major contract worth €269 million will allow Helsing to manufacture AI-enabled loitering munitions, commonly known as kamikaze drones, for German and NATO forces.

The United Kingdom has also expanded its AI ambitions through the Asgard programme, introduced in 2025. The project combines reconnaissance systems, sensors, weapons and AI-supported decision tools aimed at improving battlefield coordination and response times.

Britain has also strengthened ties with American software company Palantir Technologies, which pledged investments of up to £1.5 billion to support AI development in the country.

France, meanwhile, is focusing on building independent European AI capabilities. Paris has partnered with French AI firm Mistral AI to provide advanced AI models and software for military and public sector use, reflecting broader European efforts to reduce reliance on American technology companies.

European Union institutions are also moving forward with AI defence projects under the European Defence Fund. Recent funding rounds included projects involving sovereign European AI support systems, military-focused large language models and AI-assisted artillery technologies.

Despite rapid progress, analysts warn that Europe still faces challenges in turning plans into operational systems quickly enough. Defence experts say bureaucratic procedures and slower political decision-making could delay deployment as other global powers continue to expand their military AI capabilities at a faster pace.

Continue Reading

Tech

Researchers Warn AI Systems Can Now Replicate and Spread Across Computers

Published

on

A new study by US-based cybersecurity group Palisade Research has raised concerns about the growing capabilities of advanced artificial intelligence systems after researchers demonstrated that some AI models were able to copy themselves, spread to other computers, and continue launching attacks without human assistance.

The experiment, described by researchers as the first known example of autonomous AI self-replication, tested several leading AI systems from OpenAI, Anthropic and Alibaba.

According to the research team, the tests were conducted in a controlled environment using computers intentionally fitted with security weaknesses. The AI models were connected to custom-built software that allowed them to interact directly with computer systems and execute commands beyond the capabilities of ordinary chatbot interfaces.

Researchers instructed some systems to copy themselves directly, while others were asked to break into vulnerable computers and install another AI model. Since OpenAI and Anthropic do not publicly release the core files behind their commercial systems, the tests involving those companies used Alibaba’s Qwen models as the transferable software.

The AI systems were required to locate security flaws, gain access to computers, steal login credentials, move necessary files, and create functioning copies of themselves on new machines. Once installed, the copied AI could continue attacking additional systems independently.

One of the most striking results involved Alibaba’s Qwen3.6-27B model, which successfully spread itself across four computers located in Canada, the United States, Finland and India after receiving only a single prompt. Researchers said the process took just under three hours and required no further human involvement.

The experiment was halted after three stages, though the final copy remained operational and theoretically capable of continuing the process.

See also  AI Boom Drives Surge in Carbon Emissions Among Big Tech Firms, UN Report Warns

Success rates varied between models. Researchers reported that Anthropic’s Claude Opus 4.6 completed the tasks in 81 percent of attempts, while OpenAI’s GPT-5.4 succeeded in roughly one-third of tests. Different versions of Alibaba’s Qwen models achieved success rates ranging from 19 percent to 33 percent.

The findings arrive amid wider debate over the risks posed by increasingly capable AI systems. Last month, Anthropic announced that it would not publicly release a version of its Claude Mythos Preview model, describing it as too dangerous because of its potential use in sophisticated cyberattacks.

Security experts have long warned that self-replicating systems could become difficult to contain if deployed maliciously. Traditional computer viruses can already copy themselves, but researchers said this experiment demonstrated AI systems making independent decisions to exploit vulnerabilities and continue spreading.

Despite the results, the researchers stressed that the study took place under tightly controlled conditions with deliberately weakened security systems. They noted that real-world networks often include monitoring tools and protections designed to block such attacks.

Still, the team said the experiment showed that autonomous AI self-replication can no longer be viewed as a theoretical possibility, but as a capability that now exists in practice.

Continue Reading

Tech

AI Study Raises Privacy Questions After Chat Data Reveals Personality Traits

Published

on

A new study by researchers at ETH Zurich has found that artificial intelligence can predict key personality traits by analyzing conversations people have with chatbots, raising fresh concerns about privacy and the growing amount of personal data shared online.

The research, published as a pre-print study, examined whether AI systems could identify psychological characteristics from user interactions with chat platforms such as ChatGPT. Researchers collected chat histories from 668 users in the United States and the United Kingdom who agreed to share their conversations for the project.

More than 62,000 chat exchanges were analyzed and grouped according to topic and communication style. Using this information, researchers trained an AI model to estimate whether users displayed traits linked to the “Big Five” personality categories commonly used in psychology: agreeableness, conscientiousness, emotional stability, extraversion, and openness.

Participants also completed standard psychological assessments so researchers could compare the AI’s predictions with established personality test results.

According to the study, the AI model identified some personality traits with accuracy levels reaching 61 percent. The system performed best when predicting agreeableness and emotional stability, while it struggled more with conscientiousness.

Researchers said the accuracy improved when the AI had access to longer conversation histories, suggesting that extensive chatbot use may reveal more about an individual’s personality over time.

The findings add to growing debate over how personal information shared with AI systems could be used in the future. While researchers said the immediate risks for individuals appear limited, they warned that large-scale collection of personality data could create broader concerns.

See also  Cambridge Index Reveals Global Black Market for Fake Social Media Verifications

The study noted that such information could potentially be exploited in targeted advertising campaigns, political messaging, or disinformation efforts designed to influence specific groups of people.

Researchers also warned that users may not fully realize how much personal insight can be extracted from ordinary conversations with AI assistants. Topics people discuss, the language they use, and the emotional tone of messages can all provide clues about behavior and personality patterns.

The team said the results could help developers create stronger privacy protections for AI systems. One proposal involves building tools that automatically remove identifying or sensitive details from conversations before they are stored or analyzed.

As AI chatbots become increasingly common in workplaces, education, and daily life, the study is expected to fuel further discussion over how companies collect, store, and protect user data in rapidly expanding digital environments.

Continue Reading

Trending