Report Questions Evidence Behind AI Industry’s Climate Claims
A new report by German non-profit Beyond Fossil Fuels has raised concerns about the strength of evidence supporting claims that artificial intelligence can significantly reduce global carbon emissions.
The group reviewed more than 150 climate-related statements made by leading AI companies and organisations, including the International Energy Agency. It found that only 26 per cent of the claims cited published academic research, while 36 per cent did not reference any evidence at all. The remaining claims relied on corporate reports, media coverage, NGO publications or unpublished academic work.
According to the report, many corporate sources lack peer-reviewed data or primary research to substantiate their projections. “The evidence for massive climate benefits of AI is weak, whilst the evidence of substantial harm is strong,” the authors wrote.
Estimates of AI’s environmental footprint vary widely. A January study published in the journal Patterns suggested that data centres alone may have emitted between 32.6 million and 79.7 million tonnes of carbon dioxide in 2025, roughly comparable to the annual emissions of a small European country.
By contrast, the International Energy Agency has argued that AI could cut global emissions by up to 5 per cent by 2035 by accelerating innovation in the energy sector. The agency has pointed to applications such as testing new battery chemistries and materials for solar power as examples of how AI might support cleaner technologies.
Beyond Fossil Fuels examined high-profile industry claims, including a projection cited by Google that AI could reduce global greenhouse gas emissions by 5 to 10 per cent by 2030 if widely adopted. The report traced the estimate back to a 2021 blog post by consulting firm Boston Consulting Group, which based the figure on client experience rather than peer-reviewed global analysis. Researchers described the claim as an extrapolation built on limited evidence.
The report also reviewed assertions that smaller, narrowly trained AI models are more environmentally efficient. It concluded that there is insufficient peer-reviewed research demonstrating that such systems can deliver measurable emissions reductions at scale.
In addition, the analysis said it found no verified example of generative AI systems such as OpenAI’s ChatGPT, Google’s Gemini or Microsoft’s Copilot producing substantial, measurable emissions cuts. Even if certain efficiencies exist, the report argues that they may be outweighed by the rapid expansion in energy use linked to data centre growth.
The authors said their findings do not suggest AI lacks climate benefits altogether, but they contend there is limited evidence that current applications can offset the sector’s growing energy demands. Requests for comment were sent to major AI firms and the International Energy Agency.
Cybersecurity Experts Warn of Risks in AI Caricature Trend
The latest AI-generated caricature trend, in which users upload images of themselves to chatbots such as ChatGPT, could pose serious security risks, cybersecurity experts have warned. Images uploaded to AI chatbots may be retained for an unknown length of time and, if they fall into the wrong hands, could lead to impersonation, scams, and fake social media accounts.
The trend invites users to submit photos of themselves, sometimes alongside company logos or job details, and ask AI systems to create colourful caricatures based on what the chatbot “knows” about them. While the results can be entertaining, experts caution that sharing these images can reveal far more than participants realise.
“You are doing fraudsters’ work for them — giving them a visual representation of who you are,” said Bob Long, vice-president at age authentication company Daon. He added that the trend’s wording alone raises concerns, suggesting it could have been “intentionally started by a fraudster looking to make the job easy.”
When an image is uploaded, AI systems process it to extract data such as a person’s emotions, surroundings, or potentially location details, according to cybersecurity consultant Jake Moore. This information may then be stored indefinitely. Long said that uploaded images could also be used to train AI image generators as part of their datasets.
The potential consequences of data breaches are significant. Charlotte Wilson, head of enterprise at Israeli cybersecurity firm Check Point, said that if sensitive images fall into the wrong hands, criminals could use them to create realistic AI deepfakes, run scams, or establish fake social media accounts. “Selfies help criminals move from generic scams to personalised, high-conviction impersonation,” she said.
OpenAI’s privacy policy states that uploaded images may be used to improve its models, including for training. ChatGPT clarified that this does not mean every uploaded photo is stored in a public database, but that patterns from user content may be used to refine how the system generates images.
Experts emphasise precautions for those wishing to participate. Wilson advised avoiding images that reveal identifying details. “Crop tightly, keep the background plain, and do not include badges, uniforms, work lanyards, location clues or anything that ties you to an employer or a routine,” she said. She also recommended avoiding personal information in prompts, such as job titles, city, or employer.
Moore suggested reviewing privacy settings before participating. OpenAI allows users to opt out of AI training for uploaded content via a privacy portal, and users can also disable text-based training by turning off the “improve the model for everyone” option. Under EU law, users can request the deletion of personal data, though OpenAI may retain some information to address security, fraud, and abuse concerns.
As AI trends continue to gain popularity, experts caution that even seemingly harmless images can carry significant risks. Proper precautions and awareness are essential for users to protect their personal information while engaging with new AI technologies.
European Executives Warn AI Growth Is Outpacing Infrastructure, Nokia Survey Finds
More than 1,000 business and technology leaders across Europe have raised serious concerns about the continent’s readiness to support the rapid expansion of artificial intelligence, according to a new study by Nokia. Executives identified energy supply, network capacity, and secure connectivity as the most pressing challenges that could slow the adoption of AI across industries.
The survey found that AI is already widely used by European companies, with 67% reporting that they have integrated the technology into their operations. Another 15% are running pilot projects, indicating that adoption is expected to grow significantly in the coming years. Many businesses see AI as essential for improving efficiency, automating processes, and strengthening innovation.
Cybersecurity emerged as the leading application area, with 63% of companies using AI to protect systems and data. Automation of business processes followed at 57%, while customer service tools such as chatbots and virtual assistants accounted for 55%. Companies are also using AI for product development, predictive analytics, robotics, and supply chain management.
Despite strong adoption, executives warned that infrastructure is struggling to keep pace with demand. Nokia’s report, titled “AI is too big for the European internet,” highlighted that Europe’s digital backbone is not yet equipped to handle large-scale AI workloads. The report noted that connectivity remains fragmented and security concerns persist, creating obstacles to expansion.
Energy supply was identified as the biggest constraint. About 87% of executives said they were worried that Europe’s energy infrastructure cannot meet rising AI demand. More than half said energy systems are already under strain or at risk. One in five companies reported delays to AI projects due to energy shortages, while others said they had to adjust project timelines or choose different locations because of limited power availability.
High electricity costs were also cited as a major concern, with 52% of executives saying Europe’s energy prices are not competitive compared to other regions. Limited grid capacity, slow approval processes, and restricted access to renewable energy sources were also highlighted as barriers.
As a result, 61% of executives said they are considering relocating data-intensive operations to regions with lower energy costs or have already taken steps in that direction. Only 16% said they plan to keep operations in Europe regardless of energy constraints.
Connectivity issues are also affecting companies. More than half reported network performance problems, including delays and downtime linked to increasing data traffic. Around 86% of executives expressed concern about internet reliability as AI usage continues to expand.
The report warned that global data traffic is expected to increase sharply by 2033, placing additional strain on existing networks. Business leaders called for greater investment in energy infrastructure, improved network capacity, and clearer regulations to support Europe’s ability to compete in the global AI race.
Chile Launches Latam-GPT to Bring Latin America Its Own AI Model
Chile has unveiled Latam-GPT, a Chilean-led artificial intelligence (AI) project aimed at providing Latin America with a model trained on regional data, reducing bias, and giving the region a stronger presence in the global AI sector. The initiative, promoted by the National Centre for Artificial Intelligence (Cenia), received support from universities, foundations, libraries, government agencies, and civil society organisations across Chile, Uruguay, Brazil, Colombia, Mexico, Peru, Ecuador, and Argentina.
During a presentation on Televisión Nacional this week, Chilean President Gabriel Boric said Latam-GPT positions Latin America as an active participant in the global technology economy. “The region cannot simply be a passive user of AI systems developed elsewhere,” Boric said, noting that reliance on foreign models risks overlooking Latin America’s cultural heritage and traditions. Chilean Minister of Science Aldo Valle added that the project aims to break down prejudices and prevent the representation of Latin America from appearing homogeneous on the global stage.
Despite its name, Latam-GPT is not an interactive chat system. It is a large-scale foundational model trained on information from the region and intended to serve as a base for developing technological applications tailored to local needs.
The project comes at a time when AI development remains concentrated in the United States, China, and Europe. Similar regional initiatives, such as SEA-LION in Southeast Asia and UlizaLlama in Africa, are also emerging to focus on local cultural contexts. To create Latam-GPT, developers collected over eight terabytes of regional data, equivalent to millions of books. The first version of the system has been hosted on Amazon Web Services, with plans to train it on a supercomputer to be installed at the University of Tarapacá in northern Chile during the first half of 2026. The investment for the supercomputer is expected to approach $5 million, while initial funding of $550,000 came mainly from the Development Bank of Latin America (CAF) and contributions from partner institutions.
Alvaro Soto, director of Cenia, highlighted that most global AI models include only a small proportion of Latin American data. President Boric illustrated this by comparing the extensive information available on the siege of Calais with the limited coverage of key battles in Chilean independence, such as the siege of Chillán. Currently, the model’s content is mainly in Spanish and Portuguese, with plans to incorporate indigenous languages in the future.
Latam-GPT will be freely accessible and could support a range of local applications. Soto cited examples such as digital tools for hospitals to manage logistics and medical resources. One of the first companies to use the platform, Chile’s Digevo, plans to develop conversational AI for customer service in airlines and retail. Roberto Musso, Digevo’s director, said the system can understand local slang, idioms, and speech patterns, reducing bias present in global AI models.
Academic Alejandro Barros of the University of Chile cautioned that Latam-GPT may not compete with large international AI models due to differences in infrastructure and funding, but he acknowledged its potential to serve local needs and represent Latin America more accurately on the global stage.