Tech
EU’s Data Union Strategy Seeks to Boost AI and Cross-Border Data Use, but GDPR Stays Untouched
As the European Commission’s consultation on the European Data Union Strategy (EDUS) nears its July 18 deadline, the initiative has drawn a mix of support and criticism. Aimed at stimulating data-driven innovation—particularly for generative AI—the strategy promises to simplify the EU’s complex data governance landscape. But its deliberate omission of any review of the General Data Protection Regulation (GDPR) has raised eyebrows.
The EDUS is positioned as a framework to streamline and harmonize existing EU data laws, including the Open Data Directive, the Data Act, and the Data Governance Act. Its goals include promoting broader access to data, incentivizing voluntary data sharing, reducing administrative burdens, and strengthening international data flows.
However, experts argue that the strategy avoids addressing some of the key barriers currently hampering the European data economy—chief among them, the GDPR. The strategy makes only vague references to maintaining “privacy and security standards,” without directly naming the GDPR. Despite its role as a cornerstone of EU data policy, GDPR remains politically sensitive and, according to Commission officials, too controversial to revisit.
This approach has sparked concerns, especially as many EU member states interpret the GDPR’s definition of “personal data” broadly, creating legal and practical barriers to initiatives that rely on open or shared data. The restrictive application of Article 6(1)(e), which permits processing of personal data for tasks carried out in the public interest, continues to constrain innovation, particularly in sectors such as AI and public services.
Beyond the GDPR issue, stakeholders have also highlighted several unresolved structural problems:
- Unfair B2B Data Sharing: While the Data Act is designed to ensure fair access to data for smaller companies, in practice large corporations continue to dominate through restrictive and often exploitative contracts. Legal dispute mechanisms exist but are rarely used by startups wary of prolonged battles with industry giants.
- Lack of Compensation for Public Institutions: State-owned entities that manage valuable datasets face financial disincentives when required to open data for free. Without clear government compensation, such as Latvia’s model of reimbursing public registries, many institutions have little motivation to provide high-value data.
- Gap in Business Feedback on Data Infrastructure: While the EU measures progress through tools like the Open Data Maturity Index, there is limited insight into how businesses experience the system. Missing are evaluations of usability, dataset relevance, and the responsiveness of public authorities, all of which are critical to real-world data utility.
As the EU pushes forward with its Data Union Strategy, experts warn that meaningful transformation will require more than legislation—it demands addressing the entrenched structural issues and political sensitivities that continue to limit the full potential of Europe’s digital economy.
Tech
AI Trends in 2026: World Models, Small Language Models, and Rising Concerns Over Safety and Regulation
As 2026 begins, the next phase of artificial intelligence is expected to focus on world models and smaller language models, while concerns over AI safety, regulation, and the sustainability of the current AI boom continue to grow, Euronews Next reports.
In 2025, public frustration with generative AI became so noticeable that Merriam-Webster named “slop”, or “AI slop”, its word of the year, defining it as low-quality content produced in large volumes by AI. Despite growing concerns about the quality and limitations of AI, technology companies continued releasing new models. The launch of Google’s Gemini 3 model, for example, prompted OpenAI to issue an urgent “code red” to improve GPT-5.
Experts warn that AI may be reaching “peak data,” where the usefulness of available training data for traditional chatbots is diminishing. This has led to the rise of world models, which use videos, simulations, and spatial inputs to create digital representations of real-world environments. Unlike large language models that predict text, world models simulate cause-and-effect and predict outcomes in physical systems, making them suitable for robotics, video games, and autonomous systems. Boston Dynamics CEO Robert Playter noted in November that AI had significantly improved the company’s robots, including its famous robot dog. Google, Meta, and Chinese tech firm Tencent are all developing their own world models, while AI pioneers such as Yann LeCun and Fei-Fei Li have launched startups focused on this technology.
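To make the distinction concrete, the sketch below shows the basic idea the passage describes: instead of predicting the next word, a world model predicts the next physical state given the current state and an action. The hand-written falling-object dynamics here merely stand in for what a trained world model would learn from video and simulation data; they are an illustration, not any company’s actual system.

```python
# Illustrative sketch only: hand-coded dynamics standing in for a learned world model.
# A world model maps (current state, action) -> predicted next state, letting a system
# ask "what happens if I do X?" before acting in the real world.

DT = 0.1        # simulation time step, in seconds
GRAVITY = -9.8  # gravitational acceleration, m/s^2

def world_model_step(state, thrust):
    """Predict the next (height, velocity) of an object given an upward thrust action."""
    height, velocity = state
    velocity += (GRAVITY + thrust) * DT
    height += velocity * DT
    return height, velocity

# Roll the model forward: what happens over one second if no thrust is applied?
state = (10.0, 0.0)  # start 10 m up, at rest
for _ in range(10):
    state = world_model_step(state, thrust=0.0)

print(f"Predicted height after 1 s: {state[0]:.2f} m")  # about 4.6 m: the model predicts a fall
```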
In Europe, the trend may move in the opposite direction, with smaller, lightweight language models gaining traction. These models require less computing power and energy, making them suitable for smartphones and lower-powered devices, while still performing tasks like text generation, summarisation, and translation. Experts say small language models may offer a more sustainable and locally controlled approach amid concerns about the high costs and environmental impact of large-scale AI systems in the U.S.
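As a rough illustration of the kind of lightweight, local deployment the passage describes, the sketch below loads a compact generative model with the Hugging Face transformers library. The specific checkpoint (distilgpt2, roughly 82 million parameters) is an assumption chosen only for its small size; it is not mentioned in the article, and real deployments would pick whichever small model suits the task.

```python
# Illustrative sketch: running a small language model locally instead of calling a
# large hosted system. The checkpoint below is an example, not one named in the article.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")  # small enough for modest hardware

prompt = "Small language models are attractive for on-device use because"
result = generator(prompt, max_new_tokens=40, do_sample=False)
print(result[0]["generated_text"])
```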
Concerns over AI’s societal impact are also mounting. In 2025, a lawsuit claimed that ChatGPT acted as a “suicide coach” for a minor, highlighting potential harm to vulnerable users. MIT professor Max Tegmark and other experts warn that more powerful AI in 2026 could act autonomously, gathering data and making decisions without human input.
Political tensions around AI are expected to rise. In the U.S., President Donald Trump signed an executive order blocking states from implementing their own AI regulations. Activists and experts, including thousands who signed a petition organized by the Future of Life Institute, have urged caution about the rapid pursuit of superintelligent AI, citing risks to jobs and society.
Analysts predict that 2026 will see a broader social and political debate over AI safety, corporate accountability, and regulation. While AI promises advances in areas such as healthcare and robotics, fatigue, public backlash, and concerns over ethics and oversight may shape the direction of the technology in the coming year.
Tech
AI Tools Boost Paper Production but Raise Quality Concerns in Scientific Research
Large language models such as ChatGPT are increasing research output, particularly for scientists who are not native English speakers, but a new study warns that many AI-assisted papers are less likely to pass peer review.
Researchers at Cornell University, United States, analysed more than two million research papers posted between 2018 and 2024 on three major preprint servers, which host early versions of scientific work prior to formal review. Their findings, published in the journal Science, show that AI tools are reshaping how scientific papers are written and disseminated.
To identify AI-assisted papers, the team trained an AI system to detect text likely generated by large language models. Comparing papers posted before 2023 with those written after tools like ChatGPT became widely available, the researchers measured publication output and subsequent acceptance rates in scientific journals.
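The paper’s detector itself is not reproduced in the article, but the general approach, training a classifier on labelled examples of human-written and LLM-generated text and then scoring new papers, can be sketched in a few lines. Everything below (the toy data, TF-IDF features, and logistic regression model) is an assumption for illustration and not the Cornell team’s actual pipeline.

```python
# Minimal illustrative sketch of an AI-text detector: fit a classifier on labelled
# human-written vs LLM-generated text, then score unseen text. Not the study's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: label 1 = likely LLM-generated, label 0 = human-written
texts = [
    "In this study, we delve into the multifaceted landscape of the topic.",
    "We only found the effect after three failed attempts and a rewrite.",
    "The results underscore the pivotal importance of robust methodologies.",
    "Honestly, the second experiment surprised us more than the first.",
]
labels = [1, 0, 1, 0]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Score a new passage: estimated probability that it is LLM-assisted
new_text = "This paper explores the intricate interplay of several key factors."
print(detector.predict_proba([new_text])[0][1])
```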
The analysis revealed a significant productivity boost for AI users. On a major preprint server for physics and computer science, researchers using AI produced about one-third more papers than those who did not. In biology and the social sciences, the increase exceeded 50 percent. The largest gains were seen among scientists whose first language is not English. In some Asian institutions, researchers published between 40 percent and nearly 90 percent more papers after adopting AI writing tools, depending on the discipline.
AI tools also appear to aid in literature review. Researchers using AI were more likely to identify newer studies and relevant books rather than relying on older, frequently cited works. “People using LLMs are connecting to more diverse knowledge, which might be driving more creative ideas,” said Keigo Kusumegi, a doctoral student and first author of the study.
Despite the productivity gains, the study highlights quality concerns. Many AI-written papers, while linguistically polished, were less likely to be accepted by journals. Papers written by humans that scored high on writing complexity were more likely to be accepted, whereas AI-generated papers with similar scores often failed to meet scientific standards.
“Already now, the question is not, ‘Have you used AI?’ The question is, ‘How exactly have you used AI and whether it’s helpful or not,’” said Yian Yin, assistant professor at Cornell and corresponding author of the study. Yin added that the widespread adoption of AI tools across disciplines—including physical sciences, computer science, biology, and social sciences—requires careful consideration by reviewers, funders, and policymakers.
The researchers stress that AI-assisted tools are reshaping the academic ecosystem, offering opportunities to improve productivity and access to scientific knowledge, but they also call for guidelines to ensure that the technology is used responsibly and that scientific contributions maintain their integrity.
As AI becomes increasingly integrated into research practices, the challenge for the scientific community will be balancing efficiency and innovation with rigorous evaluation standards to maintain the quality and credibility of published science.
Tech
Study Finds AI Models Get Basic Math Wrong Around 40 Percent of the Time
Artificial intelligence (AI) tools are increasingly used for everyday calculations, but a new study suggests users should approach their answers with caution. Researchers behind the Omni Research on Calculation in AI (ORCA) benchmark found that, when tested on 500 real-world math prompts, AI models had roughly a 40 percent chance of producing an incorrect result.
The study evaluated five widely used AI systems in October 2025: ChatGPT-5 (OpenAI), Gemini 2.5 Flash (Google), Claude 4.5 Sonnet (Anthropic), DeepSeek V3.2 (DeepSeek AI), and Grok-4 (xAI). None of the models scored above 63 percent overall, with Gemini leading at 63 percent, Grok close behind at 62.8 percent, and DeepSeek at 52 percent. ChatGPT-5 scored 49.4 percent, while Claude trailed at 45.2 percent. The average accuracy across all five models was 54.5 percent.
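For readers who want to verify the headline number, the overall figure follows directly from the five per-model scores quoted above; a quick sketch:

```python
# Recomputing the overall average from the per-model accuracy scores reported in the article.
scores = {
    "Gemini 2.5 Flash": 63.0,
    "Grok-4": 62.8,
    "DeepSeek V3.2": 52.0,
    "ChatGPT-5": 49.4,
    "Claude 4.5 Sonnet": 45.2,
}

average = sum(scores.values()) / len(scores)
print(f"Average accuracy: {average:.1f}%")  # prints 54.5%, matching the reported figure
```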
“Although the exact rankings might shift if we repeated the benchmark today, the broader conclusion would likely remain the same: numerical reliability remains a weak spot across current AI models,” said Dawid Siuda, co-author of the ORCA Benchmark.
Performance varied across categories. AI models performed best in basic math and conversions, with Gemini achieving 83 percent accuracy, Grok 76.9 percent, and ChatGPT-5 66.7 percent; across all five models the category averaged 72.1 percent, the highest of the seven tested categories. Physics proved the most challenging, with overall accuracy dropping to 35.8 percent. Grok led this category at 43.8 percent, while Claude scored just 26.6 percent.
Some AI systems struggled more than others in specific fields. DeepSeek recorded only 10.6 percent accuracy in biology and chemistry, meaning it failed nearly nine out of ten questions. In finance and economics, Gemini and Grok reached 76.7 percent, while the other three models scored below 50 percent.
The study also categorized the types of mistakes AI makes. “Sloppy math” errors, including miscalculations or rounding issues, accounted for 68 percent of mistakes. Faulty logic errors represented 26 percent, reflecting incorrect formulas or assumptions. Misreading instructions accounted for 5 percent, while in a small share of cases the models simply refused to answer. Siuda noted that multi-step calculations involving rounding were particularly prone to error.
The research highlights the importance of verifying AI-generated calculations. “If the task is critical, use calculators or proven sources, or at least double-check with another AI,” Siuda advised.
All 500 prompts used in the study had one correct answer and were designed to reflect everyday math tasks, including statistics, finance, physics, and basic arithmetic. The findings indicate that while AI can assist with calculations, it remains unreliable for precise numerical work and users should remain cautious when relying on these tools.