Tech
Google Reveals Energy and Water Use of AI Prompts in New Study
Google has disclosed new details about the environmental footprint of its artificial intelligence chatbot Gemini, saying each text prompt consumes only a fraction of the energy and water suggested by earlier public estimates.
According to a technical paper and accompanying blog post released by the company, a single text query on Gemini uses about 0.24 watt-hours (Wh) of energy — roughly equivalent to watching nine seconds of television. That consumption, Google says, translates to about 0.03 grams of carbon dioxide emissions. In addition, each query requires around 0.26 millilitres of water, or approximately five drops, largely used in cooling data centre equipment.
The company stressed that its measurements accounted not only for the power consumed by the chips running Gemini but also the energy used by IT equipment in data centres, idle chip power, and water for cooling systems. By including these factors, Google argued, its estimates provide a more accurate picture of environmental impact than many existing studies.
“Per-prompt emissions are quite small,” the blog post noted, adding that the company’s figures show energy and water usage to be “substantially lower than many public estimates.”
The announcement comes as concerns grow about the rising energy demands of advanced computing. The International Energy Agency (IEA) recently projected that electricity demand from data centres, AI, and cryptocurrency could double by 2030, with AI alone expected to consume up to 945 terawatt-hours annually — nearly equivalent to Japan’s current power use.
Comparisons between Gemini and other platforms highlight stark differences. A study by the Electric Power Research Institute estimated that a prompt issued to OpenAI’s ChatGPT consumes 2.9 Wh of energy, roughly twelve times Google’s figure. By contrast, a traditional internet search requires about 0.3 Wh.
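The figures quoted above can be sanity-checked with simple arithmetic. A minimal sketch, using only the numbers reported in the article (the television wattage is derived from Google's "nine seconds of TV" equivalence, not separately sourced):

```python
# Back-of-envelope check of the per-prompt figures quoted in the article.

GEMINI_WH = 0.24   # Google's reported energy per Gemini text prompt
CHATGPT_WH = 2.9   # EPRI estimate for a ChatGPT prompt
SEARCH_WH = 0.3    # typical traditional web search

# Ratio of the EPRI ChatGPT estimate to Google's Gemini figure
ratio = CHATGPT_WH / GEMINI_WH
print(f"ChatGPT vs Gemini: {ratio:.1f}x")     # ~12.1x

# Gemini prompt vs a traditional search
print(f"Gemini vs search: {GEMINI_WH / SEARCH_WH:.2f}x")

# Implied TV power from "0.24 Wh ≈ 9 seconds of television":
# P = E / t = 0.24 Wh / (9/3600 h)
tv_watts = GEMINI_WH / (9 / 3600)
print(f"Implied TV power: {tv_watts:.0f} W")  # ~96 W
```

The derived television figure of roughly 96 W is consistent with a typical mid-size flat-panel set, which suggests the comparison in Google's blog post is internally coherent.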
Despite these relatively low per-query figures, Google’s overall emissions have surged in recent years. Its latest environmental report showed emissions up 51 percent since 2019, driven largely by the production and assembly of hardware needed to support AI technology. The company acknowledged that upstream supply chain activities are contributing significantly to its carbon footprint.
At the same time, Google said efficiency improvements are underway. The company claims that since August 2024, energy use and carbon emissions per Gemini prompt have fallen 33-fold and 44-fold respectively, reflecting advances in hardware and software optimization.
However, analysts note that the company’s data leaves key questions unanswered. While per-query emissions are modest, Google has not disclosed the total number of Gemini prompts processed daily. Without those figures, the full scale of the chatbot’s energy demand remains unclear.
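Because the daily prompt volume is undisclosed, aggregate demand can only be sketched under assumed volumes. The volumes below are purely hypothetical placeholders, not reported figures; only the 0.24 Wh per-prompt number comes from the article:

```python
# Hypothetical aggregate-demand sketch. Google has not disclosed Gemini's
# daily prompt volume, so the volumes below are illustrative assumptions only.

WH_PER_PROMPT = 0.24  # Google's reported per-prompt energy figure

# Assumed daily prompt counts: 100 million, 1 billion, 10 billion (hypothetical)
assumed_daily_prompts = [1e8, 1e9, 1e10]

for n in assumed_daily_prompts:
    gwh_per_year = n * WH_PER_PROMPT * 365 / 1e9  # Wh -> GWh
    print(f"{n:>14,.0f} prompts/day -> {gwh_per_year:,.1f} GWh/year")
```

Even at an assumed 10 billion prompts per day, the annual total (under 1 TWh) would be a small slice of the IEA's projected 945 TWh for AI overall, which is why analysts stress that per-prompt figures alone cannot settle the question of total demand.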
As AI adoption accelerates worldwide, the debate over its environmental costs is intensifying. Google’s new disclosures suggest progress in efficiency but also underscore the challenge of balancing technological innovation with sustainability.
Tech
European Journalist Suspended for Using AI-Generated Fake Quotes
Peter Vandermeersch, a senior European journalist working with Dutch publisher Mediahuis, has been temporarily suspended after an investigation found that 15 of the 53 articles he wrote for the company contained fabricated expert quotes generated by artificial intelligence (AI) and published as if they were genuine.
The Dutch newspaper NRC reported that Vandermeersch inserted “dozens” of fabricated quotes into articles published on two Mediahuis websites. Some of the statements attributed to experts could not be found in the sources Vandermeersch cited, including news articles and scientific studies. Seven of the individuals whose quotes were used confirmed they had never made the statements attributed to them.
Vandermeersch served as chief executive of Mediahuis Ireland from 2022 to 2025 before taking on a fellowship role in journalism and society at Mediahuis. He confirmed his temporary suspension on his blog, saying he relied on AI tools including ChatGPT, Perplexity, and Google’s NotebookLM to summarise lengthy reports, trusting the outputs to be accurate.
Instead, the systems generated fabricated quotes that “put words into people’s mouths,” Vandermeersch admitted. “That was not just careless, it was wrong,” he wrote. “It is particularly painful that I made precisely the mistake I have repeatedly warned colleagues about: these language models are so good that they produce irresistible quotes you are tempted to use as an author.”
Vandermeersch said he first discovered the issue last year, when two of his articles were found to contain AI-generated quotes. He did not correct the errors at the time, which allowed the problem to persist. “When I realised this a few months ago, my enthusiasm diminished, as did my use of AI,” he said.
He explained that he continues to use AI for tasks such as translation, generating ideas, creating headlines, and developing story angles, but with “far less naive trust than before.” Mediahuis has yet to announce any further disciplinary measures or whether it will retract the affected articles.
The case has raised fresh concerns about the use of AI in journalism, highlighting the risks of relying on automated systems to generate content without verification. Industry experts warn that while AI tools can be valuable for research and drafting, uncritical use can lead to serious ethical breaches, including the misrepresentation of sources.
Mediahuis said it takes the matter seriously and is reviewing editorial procedures to prevent similar incidents in the future. The scandal has sparked a wider discussion in European media about the ethical boundaries of AI in reporting, particularly when it comes to quoting real people.
The incident underscores the growing tension between technological convenience and journalistic integrity, as newsrooms across Europe experiment with AI tools while balancing accuracy and accountability.
Tech
Cyberattacks Intensify as Iran Conflict Spills Into Digital Domain
State-linked and hacktivist groups have claimed a series of cyberattacks against the United States and Israel since the war with Iran began, marking a significant escalation in the digital dimension of the conflict.
One of the most notable incidents involved Stryker, which confirmed on March 11 that a cyberattack had disrupted its global network. According to reports, employees encountered the logo of Handala, an Iran-linked hacking group, on login pages across the company’s systems. The breach reportedly targeted the firm’s Microsoft-based infrastructure, though the full extent of the disruption remains unclear.
Handala has claimed responsibility for the attack, stating it exploited cloud management systems to remotely wipe large numbers of devices worldwide. The group said the operation was carried out in retaliation for a missile strike in Iran. Independent verification of these claims is still pending.
Cybersecurity analysts say the attack is part of a broader campaign by groups linked to Iran’s security apparatus. According to findings from CloudSEK, organisations associated with the Islamic Revolutionary Guard Corps have targeted US critical infrastructure. These include CyberAv3ngers, APT33 and APT55, which are accused of attempting to infiltrate industrial systems such as power grids and water facilities.
Experts say some of these groups use simple methods, including default passwords, to access systems, while others deploy malware aimed at disrupting operations or gathering intelligence. Additional networks linked to Iran’s Ministry of Intelligence have also been active, targeting telecommunications, energy companies and government organisations.
At the same time, the United States and Israel are conducting their own cyber operations. General Dan Caine said US Cyber Command played a key role early in the conflict, disrupting Iranian communications and sensor networks. Defence Secretary Pete Hegseth confirmed that artificial intelligence and cyber tools are being used alongside conventional military operations.
Israeli intelligence has also reportedly relied on hacked data to support military planning, highlighting the growing role of cyber capabilities in modern warfare.
Hacktivist activity has surged as well. More than 60 groups formed a loose coalition known as the Cyber Islamic Resistance, coordinating attacks through online platforms. These groups have claimed hundreds of operations, including attempts to disrupt Israeli infrastructure and private sector systems. Analysts warn that such actors are often less restrained and may pose risks to civilian networks.
The conflict has also drawn in groups from outside the region, including actors based in Iraq, Russia and other parts of the Middle East. Some have targeted government websites and transport infrastructure, while pro-Israeli groups have carried out retaliatory attacks against Iranian entities.
Security experts say the growing scale and coordination of cyber operations reflect a shift in how modern conflicts are fought, with digital attacks now running parallel to military action on the ground.