Cybersecurity Experts Warn of Risks in AI Caricature Trend

The latest AI-generated caricature trend, in which users upload images to chatbots like ChatGPT, could pose serious security risks, cybersecurity experts have warned. Images uploaded to AI chatbots may be retained for an unknown period and, if they fall into the wrong hands, could enable impersonation, scams, and fake social media accounts.

The trend invites users to submit photos of themselves, sometimes alongside company logos or job details, and ask AI systems to create colourful caricatures based on what the chatbot “knows” about them. While the results can be entertaining, experts caution that sharing these images can reveal far more than participants realise.

“You are doing fraudsters’ work for them — giving them a visual representation of who you are,” said Bob Long, vice-president at age authentication company Daon. He added that the trend’s wording alone raises concerns, suggesting it could have been “intentionally started by a fraudster looking to make the job easy.”

When an image is uploaded, AI systems process it to extract data such as a person’s emotions, surroundings, or potentially location details, according to cybersecurity consultant Jake Moore. This information may then be stored indefinitely. Long said that uploaded images could also be used to train AI image generators as part of their datasets.

The potential consequences of data breaches are significant. Charlotte Wilson, head of enterprise at Israeli cybersecurity firm Check Point, said that if sensitive images fall into the wrong hands, criminals could use them to create realistic AI deepfakes, run scams, or establish fake social media accounts. “Selfies help criminals move from generic scams to personalised, high-conviction impersonation,” she said.

OpenAI’s privacy policy states that uploaded images may be used to improve its models, including for training. ChatGPT clarified that this does not mean every uploaded photo is stored in a public database, but that patterns from user content may be used to refine how the system generates images.

Experts emphasise precautions for those wishing to participate. Wilson advised avoiding images that reveal identifying details. “Crop tightly, keep the background plain, and do not include badges, uniforms, work lanyards, location clues or anything that ties you to an employer or a routine,” she said. She also recommended avoiding personal information in prompts, such as job titles, city, or employer.

Moore suggested reviewing privacy settings before participating. OpenAI allows users to opt out of AI training for uploaded content via a privacy portal, and users can also disable text-based training by turning off the “improve the model for everyone” option. Under EU law, users can request the deletion of personal data, though OpenAI may retain some information to address security, fraud, and abuse concerns.

As AI trends continue to gain popularity, experts caution that even seemingly harmless images can carry significant risks. Proper precautions and awareness are essential for users to protect their personal information while engaging with new AI technologies.

European executives warn AI growth is outpacing infrastructure, Nokia survey finds

More than 1,000 business and technology leaders across Europe have raised serious concerns about the continent’s readiness to support the rapid expansion of artificial intelligence, according to a new study by Nokia. Executives identified energy supply, network capacity, and secure connectivity as the most pressing challenges that could slow the adoption of AI across industries.

The survey found that AI is already widely used by European companies, with 67% reporting that they have integrated the technology into their operations. Another 15% are running pilot projects, indicating that adoption is expected to grow significantly in the coming years. Many businesses see AI as essential for improving efficiency, automating processes, and strengthening innovation.

Cybersecurity emerged as the leading application area, with 63% of companies using AI to protect systems and data. Automation of business processes followed at 57%, while customer service tools such as chatbots and virtual assistants accounted for 55%. Companies are also using AI for product development, predictive analytics, robotics, and supply chain management.

Despite strong adoption, executives warned that infrastructure is struggling to keep pace with demand. Nokia’s report, titled “AI is too big for the European internet,” highlighted that Europe’s digital backbone is not yet equipped to handle large-scale AI workloads. The report noted that connectivity remains fragmented and security concerns persist, creating obstacles to expansion.

Energy supply was identified as the biggest constraint. About 87% of executives said they were worried that Europe’s energy infrastructure cannot meet rising AI demand. More than half said energy systems are already under strain or at risk. One in five companies reported delays to AI projects due to energy shortages, while others said they had to adjust project timelines or choose different locations because of limited power availability.

High electricity costs were also cited as a major concern, with 52% of executives saying Europe’s energy prices are not competitive compared to other regions. Limited grid capacity, slow approval processes, and restricted access to renewable energy sources were also highlighted as barriers.

As a result, 61% of executives said they are considering relocating data-intensive operations to regions with lower energy costs or have already taken steps in that direction. Only 16% said they plan to keep operations in Europe regardless of energy constraints.

Connectivity issues are also affecting companies. More than half reported network performance problems, including delays and downtime linked to increasing data traffic. Around 86% of executives expressed concern about internet reliability as AI usage continues to expand.

The report warned that global data traffic is expected to increase sharply by 2033, placing additional strain on existing networks. Business leaders called for greater investment in energy infrastructure, improved network capacity, and clearer regulations to support Europe’s ability to compete in the global AI race.

Chile Launches Latam-GPT to Bring Latin America Its Own AI Model

Chile has unveiled Latam-GPT, a Chilean-driven artificial intelligence (AI) project aimed at providing Latin America with a model trained on regional data, reducing bias, and giving the region a stronger presence in the global AI sector. The initiative, promoted by the National Centre for Artificial Intelligence (Cenia), received support from universities, foundations, libraries, government agencies, and civil society organisations across Chile, Uruguay, Brazil, Colombia, Mexico, Peru, Ecuador, and Argentina.

During a presentation on Televisión Nacional this week, Chilean President Gabriel Boric said Latam-GPT positions Latin America as an active participant in the global technology economy. “The region cannot simply be a passive user of AI systems developed elsewhere,” Boric said, noting that reliance on foreign models risks overlooking Latin America’s cultural heritage and traditions. Chilean Minister of Science Aldo Valle added that the project aims to break down prejudices and prevent the representation of Latin America from appearing homogeneous on the global stage.

Despite its name, Latam-GPT is not an interactive chat system. It is a large-scale model trained on data from the region and intended to serve as a foundation for developing technological applications tailored to local needs.

The project comes at a time when AI development remains concentrated in the United States, China, and Europe. Similar regional initiatives, such as SEA-LION in Southeast Asia and UlizaLlama in Africa, are also emerging to focus on local cultural contexts. To create Latam-GPT, developers collected over eight terabytes of regional data—equivalent to millions of books. The first version of the system is hosted on Amazon Web Services, with plans to train it on a supercomputer to be installed at the University of Tarapacá in northern Chile during the first half of 2026. The investment for the supercomputer is expected to approach $5 million, while initial funding of $550,000 came mainly from the Development Bank of Latin America (CAF) and contributions from partner institutions.

Alvaro Soto, director of Cenia, highlighted that most global AI models include only a small proportion of Latin American data. President Boric illustrated this by comparing the extensive information available on the siege of Calais with the limited coverage of key battles in Chilean independence, such as the siege of Chillán. Currently, the model’s content is mainly in Spanish and Portuguese, with plans to incorporate indigenous languages in the future.

Latam-GPT will be freely accessible and could support a range of local applications. Soto cited examples such as digital tools for hospitals to manage logistics and medical resources. One of the first companies to use the platform, Chile’s Digevo, plans to develop conversational AI for customer service in airlines and retail. Roberto Musso, Digevo’s director, said the system can understand local slang, idioms, and speech patterns, reducing bias present in global AI models.

Academic Alejandro Barros of the University of Chile cautioned that Latam-GPT may not compete with large international AI models due to differences in infrastructure and funding, but he acknowledged its potential to serve local needs and represent Latin America more accurately on the global stage.

Nearly Half of Europeans Support Banning Social Media Platform X Over EU Rule Breaches

A new survey across Germany, France, Spain, Italy, and Poland shows that nearly half of Europeans would support banning social media platform X from the European Union if it continues to break EU rules. Conducted by YouGov, the polling highlights rising frustration among EU citizens over what they perceive as the platform’s failure to comply with European digital regulations.

The survey found that between 60 and 78 percent of respondents in each country believe the EU should take stronger action against X if it does not address breaches identified by the European Commission last year. Of those in favour of further measures, a majority—ranging from 62 to 73 percent—said the platform should be banned if it refuses to comply. Overall, 47 percent of respondents backed a potential ban.

The European Commission fined X €120 million in December under the Digital Services Act (DSA) for failing to meet transparency obligations. Central to the investigation is the blue checkmark system, which previously verified official accounts for free but is now sold for €7 a month, a change that could mislead users about account authenticity. The Commission also found the platform did not meet transparency requirements for advertising, raising concerns that users could be exposed to financial scams. X has 90 working days to respond to the Commission’s findings.

Since the fine, the platform and its built-in AI assistant, Grok, have faced additional scrutiny. Critics argue that X amplifies harmful content, including deepfake pornography and child sexual abuse material. French prosecutors recently raided X’s Paris office as part of an ongoing investigation into child abuse content.

The YouGov survey indicates strong public support for tougher enforcement against large tech platforms. Seventy percent of respondents said they would support consequences if X fails to comply with the Commission’s ruling. Among these, 17 to 28 percent favoured further fines, 23 to 29 percent supported banning the platform outright, and the largest group—40 to 52 percent—wanted a combination of fines and a ban.

Ava Lee, executive director of People vs Big Tech, said the data shows Europeans are “done with empty warnings.” She added that X could set a precedent for how the EU enforces its rules on major technology companies.

Despite public support for tougher measures, banning a major social media platform would be considered an extreme step under EU law. The Commission has not indicated that it is currently considering such a move.

The survey comes amid wider debates in Europe over social media regulation. Several countries, including Spain, France, Italy, Germany, and the United Kingdom, are considering restrictions or outright bans on social media for minors, citing concerns over illegal or harmful content. Australia has already implemented strict rules for users under 16, but experts caution that enforcement challenges mean it is too early to judge the effectiveness of such bans.

Professor Kathryn Modecki from the University of Western Australia noted that many children continue to access banned apps through simple workarounds, suggesting policymakers should monitor results carefully before expanding similar restrictions elsewhere.
