WHO Warns Europe Is Rolling Out Health Care AI Without Adequate Safeguards
Artificial intelligence is rapidly gaining ground in Europe’s health systems, offering new tools for diagnosis, patient support, and administrative efficiency. Yet a new World Health Organization (WHO) report warns that the technology is advancing without the policies needed to protect patients and health workers.
The assessment examined 50 countries across Europe and Central Asia and found wide differences in how health-related AI is adopted, funded, and regulated. While enthusiasm for digital tools is growing, only a handful of nations have built the frameworks required to manage risks.
According to the report, half of the surveyed countries now use AI chatbots to support patients. Thirty-two health systems have adopted AI-based diagnostics, most commonly for imaging and detection. Several countries are also piloting AI tools for screening programmes, pathology, mental health support, data analysis, administrative work, and workforce planning.
Examples cited in the study include Spain, which is trialling AI for early disease detection; Finland, which is using AI for staff training; and Estonia, which is applying it to large-scale data processing. Many governments have identified key priorities for integrating these tools, but far fewer have committed long-term financial support. While 26 countries have defined their goals, only 14 have set aside funding. Just four — Andorra, Finland, Slovakia, and Sweden — have national strategies dedicated specifically to AI in health.
Dr Hans Kluge, who leads the WHO’s Europe office, cautioned that technology alone cannot deliver better care. He said AI will only serve patients effectively if governments build strong systems around it, including privacy protections, legal rules, and training programmes. “AI is on the verge of revolutionising health care, but its promise will only be realised if people and patients remain at the centre of every decision,” he said.
The report highlights a key problem: AI systems depend on large datasets that may be biased, flawed, or incomplete. If those gaps shape how an algorithm interprets symptoms or medical images, the result may be an incorrect diagnosis or inappropriate treatment. WHO experts said governments must define who is responsible when AI tools make errors that affect patient safety.
The organisation urged countries to align AI development with broader public health goals and strengthen laws to address ethical and safety concerns. It also recommended training health workers to use digital tools with confidence and informing the public clearly about how AI is applied in care settings.
Dr David Novillo Ortiz, who oversees work on AI and digital health at the WHO’s Europe office, said unclear standards may already be causing hesitation among medical staff. He urged governments to guarantee that AI tools are tested thoroughly for safety, fairness, and real-world performance before they are used with patients.
