EU Pushes AI Adoption as Use Remains Uneven Across Europe
The European Union is funding AI adoption, drafting preparedness plans, and issuing ethics guidance, but AI tool use remains uneven and sometimes taboo. With 64 percent of Europeans saying AI literacy will be essential by 2030, the real test is turning ambition into measurable, large-scale outcomes.
The EU continues to support individuals and businesses in adopting AI technologies while issuing guidance on ethical use. According to a Eurobarometer survey on the future needs of digital education, nearly two-thirds of Europeans agree that AI skills will be crucial for everyone within the next decade.
Since 2021, AI adoption among European enterprises has grown by 12.3 percent, though only 19.95 percent of businesses currently use at least one AI tool. Adoption varies widely across the continent. Denmark leads with 42.03 percent of businesses using AI, followed by Finland, Sweden, Belgium, and Luxembourg, while Romania, Poland, Bulgaria, Greece, and Cyprus remain below 10 percent. Differences in AI maturity also exist, with some companies integrating AI strategically while others rely on individual tools without broader transformation plans.
Individual use of AI also shows disparities. About a third of Europeans report having used AI tools, though only 9.8 percent use generative AI for formal education. Sweden, Malta, Denmark, Spain, and Estonia rank highest in educational use, while Hungary, Romania, Poland, Bulgaria, and Germany trail far behind. Generative AI is more widely used for work, with 15.07 percent of Europeans reporting usage, led by Malta, Denmark, the Netherlands, Estonia, and Finland. For private purposes, around a quarter of Europeans use generative AI, with Cyprus, Greece, Estonia, and Malta at the top and Hungary at the bottom.
OpenAI’s ChatGPT dominates the European market with over 80 percent share, serving 120.4 million active users in the EU, roughly 26 percent of the population. Other AI tools, including Microsoft Copilot, Google Gemini, Claude, and Perplexity, account for the remainder.
The Eurobarometer survey shows Europeans have a balanced view on AI in classrooms, with 54 percent recognizing both benefits and risks and 22 percent opposing its use entirely. Experts say the EU must improve access to safe, age-appropriate AI tools for students and educators, especially in countries with lower digital skills and internet access. AI can also support the teaching of learners with learning difficulties and disabilities, offering opportunities to personalize instruction.
While the EU has launched strategies such as the AI Continent Action Plan and Apply AI initiative, experts emphasize that measurable KPIs, targeted support by sector, and differentiation by business size and maturity are critical to turning policy into high-impact outcomes without wasting public resources.
Europe faces a key challenge: ensuring AI adoption keeps pace with ambition and delivers tangible results across education, business, and daily life.
Danish Apps Surge as Citizens Seek to Avoid American Products Amid Trump Greenland Remarks
Mobile applications that help consumers identify and avoid American-made products have soared to the top of Denmark’s app store charts following US President Donald Trump’s recent comments about acquiring Greenland.
Danish shoppers are turning to the apps as a way to express their opposition to the idea of the United States purchasing the Arctic territory. Two apps in particular have seen a dramatic rise in downloads. One of them, UdenUSA, which translates as NonUSA in English, has become the most downloaded app in Denmark, surpassing even ChatGPT on the App Store.
UdenUSA allows users to scan products to determine their country of origin and suggests alternatives from nations other than the United States. Users can also add these alternative products to a shopping cart. Jonas Pipper, one of the app’s developers, told Denmark’s public broadcaster DR Nyheder that the app was designed to give consumers more clarity about their purchases rather than explicitly encouraging a boycott.
Another popular app, Made O’Meter, has also climbed the charts and currently ranks fifth on the Danish App Store. Both apps have gained attention as tools for consumers to take tangible action in response to political developments.
Experts, however, say the economic impact of such boycotts is likely to be limited. American-made products account for only a small fraction of goods sold in Denmark. Louise Aggerstrøm Hansen, a private economist at Danske Bank, said roughly 1 percent of Danish food consumption comes directly from the United States, making it difficult to measure the real effect of the boycott.
Despite this, researchers note that the apps may offer users a sense of agency in response to political events. “A lot of people watch the news and see something they don’t like and get angry about it. In this case, it’s about ourselves and Greenland,” said Pelle Guldborg Hansen, a behavioural researcher at Roskilde University. “And then you just want to do something with your anger. No matter how small it is,” he added.
Trump has repeatedly suggested since early January that the US should acquire Greenland, prompting diplomatic meetings between officials from Greenland, Denmark, and the United States. The discussions have been described as “agreeing to disagree,” while public protests against any US takeover of the island have taken place across Greenland and Denmark.
The surge in downloads for these apps reflects a broader trend of citizens seeking ways to express political discontent through daily consumer choices. While the practical impact on American exports to Denmark may be minor, the apps provide a visible avenue for individuals to respond to international political debates and assert their views at a personal level.
Google Removes Some AI Health Summaries After Accuracy Concerns
Google has reportedly removed certain AI-generated summaries for health-related searches after an investigation found that some of the information provided could be misleading.
The summaries, known as AI Overviews, appear at the top of search results and are designed to provide concise answers to user questions. A report by the Guardian newspaper found that several AI Overviews contained inaccurate health information, raising concerns about potential harm to users.
The investigation highlighted cases where the AI supplied numbers with little context in response to queries such as “what is the normal range for liver blood tests?” and “what is the normal range for liver function tests?” The results did not account for differences based on age, sex, ethnicity, or nationality. In some cases, Google’s AI extracted data from Max Healthcare, an Indian hospital chain in New Delhi, rather than providing verified global medical guidance.
Featured snippets, which also appear at the top of Google search results, differ from AI Overviews because they extract existing text from relevant websites rather than generating new content. However, the Guardian noted that even variations of liver test queries, such as “[liver function test] lft reference range,” continued to produce AI-generated summaries. Liver function tests measure proteins and enzymes in the blood to evaluate how well the liver is performing.
In one example, Google’s AI reportedly advised pancreatic cancer patients to avoid high-fat foods. Experts told the Guardian that such guidance could be dangerous, since pancreatic cancer patients often struggle to maintain weight and are generally encouraged to keep up their calorie intake, meaning that following the advice could increase the risk of mortality.
The Guardian’s findings come amid broader concerns about AI chatbots “hallucinating,” a term used to describe when AI systems generate false or misleading information and present it as fact. Experts have warned that reliance on AI for medical information could pose serious risks if users interpret these responses as authoritative.
Euronews Next contacted Google to confirm whether AI Overviews had been removed from certain health queries but did not receive an immediate response. Separately, Google announced over the weekend that it would expand AI Overviews to Gmail, allowing users to ask questions about their emails and receive automated answers without searching through messages manually.
The development underscores ongoing tensions between AI innovation and accuracy, particularly in sensitive areas such as healthcare. As AI tools become more integrated into search engines and email platforms, experts emphasize the importance of verifying information with trusted medical sources and caution against relying solely on machine-generated summaries.
ChatGPT Launches Health Feature to Help Users Manage Medical Information
OpenAI has unveiled a new health-focused feature for ChatGPT, aimed at helping users better understand their well-being and prepare for medical conversations. The tool, called ChatGPT Health, connects to users’ personal health data sources, such as medical records and wellness apps, to deliver more personalized guidance.
The feature is designed as a standalone experience within ChatGPT, with health-related chats, files, and connected apps kept separate from users’ other conversations. OpenAI said health information is not shared with non-health chats, and users can view or delete Health memories at any time through the platform’s settings.
“ChatGPT Health is another step toward turning ChatGPT into a personal super-assistant that can support you with information and tools to achieve your goals across any part of your life,” Fidji Simo, OpenAI’s applications CEO, said in a post on Substack.
Users can connect apps such as Apple Health, MyFitnessPal, and Function to ChatGPT Health. The AI can then help interpret recent test results, offer guidance for doctor appointments, and provide insights on diet, exercise routines, or healthcare choices. OpenAI emphasized that all app connections require explicit user permission and undergo additional privacy and security reviews.
OpenAI stressed that the tool is not intended to replace medical care. ChatGPT Health is designed to help users understand patterns in their health and to answer everyday wellness questions. The company said the platform was developed with input from more than 260 physicians across 60 countries, who provided feedback on model outputs over 600,000 times.
Health-related queries are already a major reason people use ChatGPT, with the company reporting that over 230 million questions about health and wellness are asked globally each week. ChatGPT Health aims to make these interactions more personalized by leveraging data from users’ medical and wellness apps.
Access to ChatGPT Health is initially limited to a small group of early users with Free, Go, Plus, or Pro accounts. Users in the European Economic Area, Switzerland, and the United Kingdom are not included in the early rollout due to stricter local health and data regulations. Some app integrations and medical record access are currently only available in the United States.
OpenAI said it plans to expand ChatGPT Health to all users on web and iOS in the coming weeks as the platform is refined.
The company highlighted that the feature is meant to complement, not replace, professional medical advice. By providing insights from personal health data and helping users track trends over time, ChatGPT Health seeks to make individuals better prepared for discussions with their healthcare providers.