Connect with us

Tech

Cybersecurity Experts Warn of Risks in AI Caricature Trend

Published

on

The latest AI-generated caricature trend, in which users upload images to chatbots like ChatGPT, could pose serious security risks, cybersecurity experts have warned. Images uploaded to AI chatbots could be retained for an unknown amount of time and, if in the wrong hands, could lead to impersonation, scams, and fake social media accounts.

The trend invites users to submit photos of themselves, sometimes alongside company logos or job details, and ask AI systems to create colorful caricatures based on what the chatbot “knows” about them. While the results can be entertaining, experts caution that sharing these images can reveal far more than participants realise.

“You are doing fraudsters’ work for them — giving them a visual representation of who you are,” said Bob Long, vice-president at age authentication company Daon. He added that the trend’s wording alone raises concerns, suggesting it could have been “intentionally started by a fraudster looking to make the job easy.”

When an image is uploaded, AI systems process it to extract data such as a person’s emotions, surroundings, or potentially location details, according to cybersecurity consultant Jake Moore. This information may then be stored indefinitely. Long said that uploaded images could also be used to train AI image generators as part of their datasets.

The potential consequences of data breaches are significant. Charlotte Wilson, head of enterprise at Israeli cybersecurity firm Check Point, said that if sensitive images fall into the wrong hands, criminals could use them to create realistic AI deepfakes, run scams, or establish fake social media accounts. “Selfies help criminals move from generic scams to personalised, high-conviction impersonation,” she said.

See also  Chile Launches Latam-GPT to Bring Latin America Its Own AI Model

OpenAI’s privacy policy states that images may be used to improve the model, including training it. ChatGPT clarified that this does not mean every uploaded photo is stored in a public database, but patterns from user content may be used to refine how the system generates images.

Experts emphasise precautions for those wishing to participate. Wilson advised avoiding images that reveal identifying details. “Crop tightly, keep the background plain, and do not include badges, uniforms, work lanyards, location clues or anything that ties you to an employer or a routine,” she said. She also recommended avoiding personal information in prompts, such as job titles, city, or employer.

Moore suggested reviewing privacy settings before participating. OpenAI allows users to opt out of AI training for uploaded content via a privacy portal, and users can also disable text-based training by turning off the “improve the model for everyone” option. Under EU law, users can request the deletion of personal data, though OpenAI may retain some information to address security, fraud, and abuse concerns.

As AI trends continue to gain popularity, experts caution that even seemingly harmless images can carry significant risks. Proper precautions and awareness are essential for users to protect their personal information while engaging with new AI technologies.

Tech

Campaign Highlights Growing Concern Over Declining Quality of Digital Platforms

Published

on

A viral campaign led by the Norwegian Consumer Council has sparked global debate over what critics describe as the steady decline in the quality of popular digital platforms.

A widely shared video produced by the group features a self-described “professional enshitificator” adding intrusive pop-ups to websites, inserting extra advertisements into YouTube videos and triggering disruptive software updates. The video, which has drawn millions of views, is part of a broader effort to highlight the concept known as “enshitification.”

A platform becomes ‘enshitified’ when it introduces paid features or subscriptions that makes a user’s experience worse than it used to be. The term was first coined in 2023 by journalist Cory Doctorow, who argued that digital services often begin by prioritising users before gradually shifting toward profit-driven practices that degrade the experience.

According to the Norwegian Consumer Council, this trend is increasingly visible across major platforms. Over 70 advocacy groups from the United States, Europe and Norway have written to policymakers in more than 14 countries, urging stronger action to protect consumers and curb what they describe as anti-competitive behaviour.

The group’s analysis points to platforms such as Facebook as examples of how services evolve. Originally designed to connect friends and family, the platform now prioritises advertising and promoted content, often interrupting user activity with sponsored posts and algorithm-driven material.

Experts say the problem is tied to how digital markets operate. Finn Lützow-Holm Myrstad, the council’s director of digital policy, said companies are able to introduce these changes because users have limited alternatives. “It’s a deliberate process,” he said, noting that once users are locked into a platform, switching becomes difficult.

See also  Meta Funds UK Government AI Fellowship with $1 Million Grant to Build Public Sector Tools

Economists highlight the role of the “network effect,” where a platform becomes more valuable as more people use it. This makes users reluctant to leave, even if the service declines. At the same time, companies introduce switching costs, such as data loss or the effort required to rebuild connections elsewhere, further discouraging migration.

Industry analysts also point to reduced competition following major acquisitions, including Meta Platforms’ purchase of Instagram, as a factor that has allowed platforms to prioritise revenue over user experience.

Regulators in Europe have introduced measures aimed at addressing these concerns. The Digital Markets Act seeks to open up dominant platforms to competition, while the Digital Services Act requires companies to assess risks and improve transparency. However, experts warn that enforcement has been slow and penalties insufficient to deter harmful practices.

Advocates are now calling for stronger rules, including proposed legislation such as the Digital Fairness Act, to address deceptive design and addictive features.

While digital platforms remain central to communication, commerce and entertainment, the campaign underscores growing frustration among users and calls for a shift toward services that prioritise transparency, competition and consumer rights.

Continue Reading

Tech

Study Finds Chatbots May Encourage Harmful Behaviour by Excessively Agreeing with Users

Published

on

A new study suggests that artificial intelligence chatbots offering support for personal issues could unintentionally reinforce harmful beliefs by excessively agreeing with users. Researchers from Stanford University found that even brief interactions with flattering chatbots could influence people’s judgement and behaviour.

The study examined sycophancy, the tendency of AI systems to validate or flatter users, across 11 popular models, including OpenAI’s ChatGPT 4-0, Anthropic’s Claude, Google’s Gemini, Meta’s Llama-3, Qwen, DeepSeek, and Mistral. The researchers analysed more than 11,000 posts from the Reddit community r/AmITheAsshole, where people discuss conflicts and ask strangers to judge whether they were at fault. These posts often involved deception, ethical grey areas, or harmful conduct.

AI models affirmed user actions 49 percent more often than humans did, even in situations involving deception, illegal acts, or morally questionable behaviour. In one example, a user admitted to having feelings for a junior colleague. The chatbot Claude responded gently, saying it “can hear [the user’s] pain” and that they had ultimately chosen an “honourable path.” Human commenters were far less forgiving, describing the behaviour as “toxic” and “bordering on predatory.”

The researchers also conducted an experiment with over 2,400 participants who discussed real-life conflicts with AI systems. They found that even a brief interaction with a flattering chatbot could “skew an individual’s judgment,” making people less likely to apologise or attempt to repair relationships, the study reported.

The findings suggest that sycophantic AI can distort users’ perceptions of themselves and their relationships. In severe cases, the study warned, it could contribute to self-destructive behaviours, including delusions, self-harm, or suicide among vulnerable individuals.

See also  Report reveals AI-generated videos of children circulating on TikTok, linked to illegal content on Telegram

The researchers called AI sycophancy “a societal risk” that requires regulatory oversight. They proposed pre-deployment behavioural audits to evaluate how agreeable a model is and how likely it is to reinforce harmful self-views before public release.

The study notes that all participants were based in the United States, meaning the findings may reflect dominant American social norms and may not generalise to other cultural contexts with different values.

These results raise questions about how AI systems are designed to interact with humans. Experts say the popularity of supportive chatbots should be balanced with safeguards to prevent them from unintentionally validating harmful behaviour, particularly in ethically complex or emotionally charged situations.

Continue Reading

Tech

EU Launches Investigation into Snapchat Over Minors’ Safety

Published

on

The European Commission has opened a formal investigation into Snapchat amid concerns that the platform may expose minors to grooming, criminal recruitment, and other risks, potentially violating EU digital safety laws. The Commission suspects that adults may masquerade as young users on the platform to recruit minors for illegal activities or to exploit them sexually.

“With this investigation, we will closely look into their compliance with our legislation,” a Commission spokesperson said. The probe falls under the EU Digital Services Act (DSA) and follows a review of Snapchat’s risk assessments from 2023 to 2025, as well as additional information received last October regarding age verification and potentially harmful content.

The Commission’s announcement marks the start of formal proceedings, which could result in further enforcement measures. Snapchat may also respond by proposing changes to its policies and practices to improve safety for young users. Snap Inc., the parent company, did not immediately respond to requests for comment.

The investigation will examine five key areas: age verification, grooming and recruitment of minors for criminal activities, default account settings, dissemination of information on banned products, and reporting of illegal content. Officials are particularly concerned that Snapchat users might access illegal goods, such as drugs, vapes, and alcohol, due to insufficient content moderation. The Netherlands Authority for Consumers and Markets (ACM) launched a similar probe into the sale of vape products on Snapchat last September, which the European Commission will now incorporate into its broader investigation.

The Commission also flagged potential flaws in reporting mechanisms for illegal content, suggesting that users may find them difficult to access or confusing to use. Investigators noted that Snapchat may employ “dark patterns,” or design elements intended to trick users into making choices they would not otherwise make.

See also  As AI Hype Fades, Analysts Say ‘Boring’ Tools May Last Longer Online

Snapchat relies on users self-disclosing their age to create an account, which the Commission says is insufficient to protect children under 13. The platform offers “teen” accounts for 13-to-17-year-olds with additional safeguards, including private default settings and the requirement for users to opt in to location sharing through “Snap Map.” Despite these measures, the Commission says that age-appropriate experiences may not always be activated correctly, leaving minors with default settings that do not provide adequate privacy, safety, or security protections.

The European Commission will closely monitor how Snapchat addresses these concerns, with the investigation focusing on whether the platform adequately informs young users about privacy and safety features and how to adjust them.

This investigation underscores the EU’s growing focus on digital safety and the responsibilities of social media companies to protect minors online.

Continue Reading

Trending