Tech

EU’s Data Union Strategy Seeks to Boost AI and Cross-Border Data Use, but GDPR Stays Untouched

As the European Commission’s consultation on the European Data Union Strategy (EDUS) nears its July 18 deadline, the initiative has drawn a mix of support and criticism. Aimed at stimulating data-driven innovation—particularly for generative AI—the strategy promises to simplify the EU’s complex data governance landscape. But its deliberate omission of any review of the General Data Protection Regulation (GDPR) has raised eyebrows.

The EDUS is positioned as a framework to streamline and harmonize existing EU data laws, including the Open Data Directive, the Data Act, and the Data Governance Act. Its goals include promoting broader access to data, incentivizing voluntary data sharing, reducing administrative burdens, and strengthening international data flows.

However, experts argue that the strategy avoids addressing some of the key barriers currently hampering the European data economy—chief among them, the GDPR. The strategy makes only vague references to maintaining “privacy and security standards,” without directly naming the GDPR. Despite its role as a cornerstone of EU data policy, GDPR remains politically sensitive and, according to Commission officials, too controversial to revisit.

This approach has sparked concerns, especially as many EU member states interpret GDPR’s definition of “personal data” narrowly, creating legal and practical barriers to initiatives that rely on open or shared data. The lack of meaningful flexibility under Article 6(1)(f), the legitimate-interests basis for processing personal data, continues to constrain innovation, particularly in sectors like AI and public services.

Beyond the GDPR issue, stakeholders have also highlighted several unresolved structural problems:

  1. Unfair B2B Data Sharing
    While the Data Act is designed to ensure fair access to data for smaller companies, in practice, large corporations continue to dominate through restrictive and often exploitative contracts. Legal dispute mechanisms exist but are rarely used by startups wary of prolonged battles with industry giants.

  2. Lack of Compensation for Public Institutions
    State-owned entities that manage valuable datasets face financial disincentives when required to open data for free. Without clear government compensation—such as Latvia’s model of reimbursing public registries—many institutions have little motivation to provide high-value data.

  3. Gap in Business Feedback on Data Infrastructure
    While the EU measures progress through tools like the Open Data Maturity Index, there is limited insight into how businesses experience the system. Missing are evaluations on usability, dataset relevance, and responsiveness of public authorities—factors critical to real-world data utility.


As the EU pushes forward with its Data Union Strategy, experts warn that meaningful transformation will require more than legislation—it demands addressing the entrenched structural issues and political sensitivities that continue to limit the full potential of Europe’s digital economy.


Cambridge Index Reveals Global Black Market for Fake Social Media Verifications


A new index developed by the University of Cambridge has revealed the scale and affordability of the underground market for fake social media account verifications, raising fresh concerns about online manipulation and digital security. According to researchers, fake verification badges can be purchased for as little as eight cents, enabling the rapid creation of networks that imitate authentic users across major online platforms.

The Cambridge Online Trust and Safety Index (COTSI), launched on Thursday, is described as the first global tool capable of tracking real-time prices for fraudulent account verification. The index monitors more than 500 platforms, including TikTok, Instagram, Amazon, Spotify and Uber. By analysing data from sellers operating across the dark web and black-market channels, the project highlights how accessible and inexpensive these services have become.

Researchers say the low cost of creating fake accounts is contributing to the rise of “bot armies” — large groups of automated or semi-automated profiles designed to mimic genuine human activity. These networks can distort online conversations, amplify misleading content, and promote scams or commercial products. They can also be deployed to influence political messaging, creating an illusion of public support or opposition during major events such as elections or policy debates.

The team behind the index said the findings come at a sensitive time for governments and regulators working to contain misinformation. Many popular platforms have reduced investment in content monitoring during the past two years, while others have introduced programmes that reward users for generating high volumes of engagement. Researchers warn that such incentives may encourage the use of artificially inflated interactions, making fake accounts even more valuable to those seeking influence.


According to Cambridge analysts, the market for fraudulent verification has become highly sophisticated. Sellers offer tiered packages, guaranteeing features such as blue-badge symbols, verified rankings or the appearance of longstanding account history. Prices vary by platform and country, but the index shows that even the most complex packages remain within easy reach for groups attempting to manipulate public debate or carry out coordinated campaigns.

The launch of COTSI marks the first attempt to document these prices on a global scale. By presenting live data on the cost of creating fake identities, researchers hope to give policymakers, technology companies and security agencies a clearer picture of how digital manipulation is evolving. The study’s authors stress that tracking these markets is essential for understanding the risks posed by unauthenticated accounts, particularly during periods of political tension.

The university said the index will be updated regularly and will remain publicly accessible as part of its efforts to strengthen digital transparency worldwide.


Experts Question Impact of Australia’s New Social Media Ban for Children Under 16


Australia has introduced sweeping restrictions that prevent children under 16 from creating or maintaining accounts on major social media platforms, but experts warn the measures may not significantly change young people’s online behaviour. The restrictions, which took effect on December 10, apply to platforms including Facebook, Instagram, TikTok, Snapchat, YouTube, Twitch, Reddit and X.

Under the new rules, children cannot open accounts, yet they can still access most platforms without logging in—raising questions about how effective the regulations will be in shaping online habits. The eSafety Commissioner says the reforms are intended to shield children from online pressures, addictive design features and content that may harm their health and wellbeing.

Social media companies are required to block underage users through age-assurance tools that rely on facial-age estimation, ID uploads or parental consent. Ahead of the rollout, authorities tested 60 verification systems across 28,500 facial recognition assessments. The results showed that while many tools could distinguish children from adults, accuracy declined among users aged 16 and 17, girls, and non-Caucasian users, for whom estimates could be off by two years or more. Experts say these limitations mean many teenagers may still find ways around the rules.

“How do they know who is 14 or 15 when the kids have all signed up as being 75?” asked Sonia Livingstone, a social psychology professor at the London School of Economics. She warned that misclassifications will be common as platforms attempt to enforce the regulations.

Meta acknowledged the challenge, saying complete accuracy is unlikely without requiring every user to present government ID—something the company argues would raise privacy and security concerns. Users over 16 who lose access by mistake are allowed to appeal.


Several platforms have criticised the ban, arguing that it removes teenagers from safer, controlled environments. Meta and Google representatives told Australian lawmakers that logged-in teenage accounts already come with protections that limit contact from unknown users, filter sensitive subjects and disable personalised advertising. Experts say these protections are not always effective, citing studies where new YouTube and TikTok accounts quickly received misogynistic or self-harm-related content.

Analysts expect many teenagers to shift to smaller or less regulated platforms. Apps such as Lemon8, Coverstar and Tango have surged into Australia’s top downloads since the start of December. Messaging apps like WhatsApp, Telegram and Signal—exempt from the ban—have also seen a spike in downloads. Livingstone said teenagers will simply “find alternative spaces,” noting that previous bans in other countries pushed young users to new platforms within days.

Researchers caution that gaming platforms such as Discord and Roblox, also outside the scope of the ban, may become new gathering points for young Australians. Studies will be conducted to assess the long-term impact on mental health and whether the restrictions support or complicate parents’ efforts to regulate screen time.

Experts say it may take several years to determine whether the ban delivers meaningful improvements to children’s wellbeing.


OECD Warns of Sharp Rise in Cyberbullying Across Europe


Cyberbullying among adolescents has increased across every European country included in a new report by the Organisation for Economic Co-operation and Development (OECD), raising concerns among researchers, educators and child-protection advocates. The findings, part of the OECD’s How’s Life for Children in the Digital Age? report, show that online harassment is now affecting young people in all 29 countries and regions surveyed, with wide disparities between nations.

The data, which covers children aged 11, 13 and 15, reveals rates ranging from 7.5 per cent in Spain to 27.1 per cent in Lithuania. The European average stands at 15.5 per cent. Alongside Lithuania, the countries with the highest levels include Latvia, Poland, England, Hungary, Estonia, Ireland, Scotland, Slovenia, Sweden, Wales, Finland and Denmark. Nations such as Portugal, Greece, France, Germany and Italy recorded lower-than-average levels.

Cyberbullying in the study refers to repeated or intentional harassment online, including hostile messages, posts designed to ridicule, or the sharing of unflattering or inappropriate images without consent. The OECD noted that online abuse often involves a power imbalance and is amplified by the reach of digital platforms.

Experts attribute national differences to a combination of technological access, cultural norms and institutional preparedness. James O’Higgins Norman, UNESCO Chair on Bullying and Cyberbullying at Dublin City University, said variations in smartphone use, internet penetration and dominant social media platforms influence how often young people are exposed to harmful interactions. He added that cultural attitudes toward conflict and aggression, as well as the quality of school-based prevention programmes, shape each country’s experience.


Specialists from the European Antibullying Network pointed to digital literacy as a key factor. Countries that teach online safety as part of the school curriculum tend to see better outcomes. They also highlighted broader social and economic inequalities, noting that communities with fewer resources often struggle to support vulnerable children effectively.

The report shows that cyberbullying increased everywhere between the 2017–18 and 2021–22 survey periods. Denmark, Lithuania, Norway, Slovenia, Iceland and the Netherlands recorded jumps of more than five percentage points. The OECD average rose from 12.1 to 15.5 per cent. Researchers say the rise coincided with increased access to smartphones and longer daily screen time among adolescents.

Experts agree that the COVID-19 pandemic accelerated the trend. With schools closed and socialising taking place online, young people spent more time on platforms where conflicts could quickly escalate. Digital environments that offer anonymity and instant communication can weaken empathy and accountability, making hostile behaviour more likely, O’Higgins Norman said. He added that some countries are now reporting signs of stabilisation as in-person schooling has resumed.

Girls are more likely than boys to report being cyberbullied in most countries. Across the OECD sample, the rate is 16.4 per cent for girls and 14.3 per cent for boys. Researchers link this gap to the nature of online interactions, as girls tend to engage more in social-media communication, where relational forms of aggression — such as exclusion or image-based harassment — are more common.

Family structure also plays a significant role. Adolescents living in one-parent households report a cyberbullying rate of 19.8 per cent, compared with 14.1 per cent among those living with two parents. Experts say single parents often face heavier time and financial pressures, reducing their capacity to supervise online activity. Young people in such households may also spend more time online for social connection, increasing exposure to risk.


The OECD’s findings add to growing calls for more comprehensive national strategies, stronger digital-literacy education and support structures that reflect the realities of adolescent online life.
