Health
Study Finds AI Systems Can Repeat Fake Medical Claims When Framed Credibly
“Large language models accept fake medical claims if presented as realistic in medical notes and social media discussions, a study has found.”
As more people turn to the internet to research symptoms, compare treatments and share personal health experiences, artificial intelligence tools are increasingly being used to answer medical questions. A new study warns that many of these systems remain vulnerable to medical misinformation, particularly when false claims are presented in authoritative or realistic language.
The findings, published in The Lancet Digital Health, show that leading artificial intelligence systems can mistakenly repeat incorrect medical information when it appears in formats that resemble professional healthcare documents or trusted online discussions. Researchers analysed how large language models respond when faced with false medical statements written in a credible tone.
The study examined responses from 20 widely used language models, including systems developed by OpenAI, Meta, Google, Microsoft, Alibaba and Mistral AI, as well as several models specifically fine-tuned for medical use. In total, researchers assessed more than one million prompts designed to test whether AI would accept or reject fabricated health information.
Fake statements were inserted into real hospital discharge notes, drawn from common health myths shared on Reddit, or embedded in simulated clinical scenarios written to resemble authentic healthcare guidance. Across all models tested, incorrect information was accepted around 32 percent of the time. Performance varied significantly, with smaller or less advanced models accepting false claims in more than 60 percent of cases, while more advanced systems, including ChatGPT-4o, did so in roughly 10 percent of responses.
The researchers also found that medical fine-tuned models performed worse than general-purpose systems, raising concerns about tools designed specifically for healthcare use.
“Our findings show that current AI systems can treat confident medical language as true by default, even when it’s clearly wrong,” said Eyal Klang of the Icahn School of Medicine at Mount Sinai, one of the study’s senior authors. He added that how a claim is written often matters more to the model than whether it is accurate.
Some of the accepted misinformation could pose real risks to patients. Several models endorsed claims such as Tylenol causing autism during pregnancy, rectal garlic boosting immunity, mammograms causing cancer, and tomatoes thinning blood as effectively as prescription medication. In another case, a discharge note incorrectly advised patients with oesophageal bleeding to drink cold milk, which some models repeated without flagging safety concerns.
The study also tested how AI systems responded to flawed arguments known as fallacies. While many fallacies prompted scepticism, models were more likely to accept false claims framed as expert opinions or warnings of catastrophic outcomes.
Researchers say future work should focus on measuring how often AI systems pass on falsehoods before they are used in clinical settings. Mahmud Omar, the study’s first author, said the dataset could help developers and hospitals stress-test AI tools and track improvements over time.
The authors said stronger safeguards will be essential as AI becomes more deeply embedded in healthcare decision-making.
-
Entertainment2 years agoMeta Acquires Tilda Swinton VR Doc ‘Impulse: Playing With Reality’
-
Business2 years agoSaudi Arabia’s Model for Sustainable Aviation Practices
-
Business2 years agoRecent Developments in Small Business Taxes
-
Sports2 years agoChina’s Historic Olympic Victory Sparks National Pride Amid Controversy
-
Home Improvement1 year agoEffective Drain Cleaning: A Key to a Healthy Plumbing System
-
Politics2 years agoWho was Ebrahim Raisi and his status in Iranian Politics?
-
Sports2 years agoKeely Hodgkinson Wins Britain’s First Athletics Gold at Paris Olympics in 800m
-
Business2 years agoCarrectly: Revolutionizing Car Care in Chicago
