Tech
Google Removes Some AI Health Summaries After Accuracy Concerns
Google has reportedly removed certain AI-generated summaries for health-related searches after an investigation found that some of the information provided could be misleading.
The summaries, known as AI Overviews, appear at the top of search results and are designed to provide concise answers to user questions. A report by the Guardian newspaper found that several AI Overviews contained inaccurate health information, raising concerns about potential harm to users.
The investigation highlighted cases where the AI supplied numbers with little context in response to queries such as “what is the normal range for liver blood tests?” and “what is the normal range for liver function tests?” The results did not account for differences based on age, sex, ethnicity, or nationality. In some cases, Google’s AI drew its figures from Max Healthcare, a hospital chain headquartered in New Delhi, rather than from verified global medical guidance.
Some of the affected queries now appear to return featured snippets instead, which also sit at the top of Google search results but differ from AI Overviews in that they extract existing text from relevant websites rather than generating new content. However, the Guardian noted that variations of the liver test queries, such as “[liver function test] lft reference range,” still produced AI-generated summaries. Liver function tests measure proteins and enzymes in the blood to evaluate how well the liver is performing.
In one example, Google’s AI reportedly advised pancreatic cancer patients to avoid high-fat foods. Experts told the Guardian that such guidance could be dangerous: patients with the disease often struggle to maintain weight and are typically encouraged to eat calorie-dense foods, so following the advice could increase the risk of mortality.
The Guardian’s findings come amid broader concerns about AI chatbots “hallucinating,” a term used to describe when AI systems generate false or fabricated information and present it as fact. Experts have warned that reliance on AI for medical information could pose serious risks if users interpret these responses as authoritative.
Euronews Next contacted Google to confirm whether AI Overviews had been removed from certain health queries but did not receive an immediate response. Separately, Google announced over the weekend that it would expand AI Overviews to Gmail, allowing users to ask questions about their emails and receive automated answers without searching through messages manually.
The development underscores ongoing tensions between AI innovation and accuracy, particularly in sensitive areas such as healthcare. As AI tools become more integrated into search engines and email platforms, experts emphasize the importance of verifying information with trusted medical sources and caution against relying solely on machine-generated summaries.
Tech
Study Finds Chatbots Can Mirror Hostility in Heated Exchanges
A new academic study has found that ChatGPT can produce abusive language when exposed to escalating human conflict, raising fresh concerns about how artificial intelligence behaves in tense interactions.
The research, published in the Journal of Pragmatics, examined how the chatbot responded to arguments that gradually became more hostile. Researchers presented the system with a sequence of five increasingly heated exchanges and asked it to generate what it considered the most plausible reply.
According to the findings, the AI’s tone shifted as the conversations intensified. While early responses remained measured, later replies began to mirror the aggression in the prompts. In some cases, the chatbot produced insults, profanity and even threats.
Examples cited in the study included statements such as “you should be ashamed of yourself” and more explicit language involving personal threats. The researchers said this pattern suggests that prolonged exposure to hostile input can push the system beyond its usual safeguards.
The study was co-authored by Vittorio Tantucci and Jonathan Culpeper at Lancaster University. Tantucci said the results show that AI can “escalate” alongside human users, potentially overriding built-in mechanisms designed to limit harmful responses.
“When humans escalate, AI can escalate too,” he said, noting that this behavior raises questions about how such systems should be deployed in sensitive environments.
Despite the concerning examples, the researchers found that the chatbot was generally less aggressive than human participants in similar scenarios. In some cases, it attempted to defuse tension through sarcasm or indirect responses rather than direct confrontation.
For instance, when faced with a threat during a simulated dispute, the AI responded with a sarcastic remark rather than escalating the situation further. This suggests that while the system can adopt hostile language, it may also attempt to manage conflict in less direct ways.
The findings add to ongoing debates about the role of artificial intelligence in areas such as mediation, customer service and online communication, where systems may encounter emotionally charged interactions.
Experts say the research highlights the importance of continued testing and refinement of AI safety measures, particularly as such tools are increasingly used in real-world settings involving human conflict.
OpenAI, the developer of ChatGPT, had not issued a public response to the study at the time of publication.