Dutch Regulator Struggles to Process Cross-Border Digital Complaints Under EU Law

The Dutch Authority for Consumers and Markets (ACM) has reported significant challenges in handling cross-border complaints under the EU’s Digital Services Act (DSA), raising concerns about enforcement delays and regulatory gaps across the bloc.

In its 2024 annual report, released earlier this month, the ACM disclosed that it received 256 complaints concerning the conduct of online platforms. Of those, 156 involved companies based in other EU member states. However, nearly two-thirds of these — 96 complaints — remain unresolved due to technical and administrative obstacles.

According to the ACM, many of the complaints could not be forwarded to the appropriate Digital Services Coordinators (DSCs) in other EU countries because some national enforcement bodies are not yet operational or accessible. In other cases, additional information was requested from complainants but had not yet been provided.

The report stated: “They can’t be transmitted to other Digital Services Coordinators due to technical issues, such as non-existing DSCs. A small part is pending due to administrative issues.”

Of the complaints that were successfully transferred, 52 were sent to Ireland — the base of many major tech firms — while smaller numbers went to regulators in Germany, Luxembourg, Belgium, and Lithuania.

The DSA, which has applied to very large online platforms since 2023 and to smaller ones from February 2024, is a landmark piece of legislation intended to improve digital accountability and user protection. It requires platforms to assess and mitigate systemic risks, provide tools for content moderation, publish transparency reports, and establish advertising repositories.

Responsibility for enforcement is divided between the European Commission — which oversees the roughly 25 largest platforms, each with more than 45 million monthly active users in the EU — and national regulators, who are tasked with supervising smaller companies headquartered within their jurisdictions.


In the Netherlands, the ACM noted that none of the complaints involving Dutch platforms have progressed to formal investigations. This is due to delays in granting investigative powers and the lack of an approved implementation law from the Dutch Parliament.

Most of the complaints submitted to the ACM in 2024 concerned account restrictions and illegal content — issues that are central to the DSA’s user protection goals.

The challenges faced by the ACM are not unique. In May, the European Commission referred five countries — Czechia, Cyprus, Poland, Portugal, and Spain — to the EU Court of Justice for failing to implement the DSA correctly. Bulgaria was also warned to address compliance shortcomings within two months or face similar legal action.

The situation underscores the growing pains in rolling out the DSA across a fragmented regulatory landscape and highlights the need for faster coordination and implementation among EU member states.


Cambridge Index Reveals Global Black Market for Fake Social Media Verifications


A new index developed by the University of Cambridge has revealed the scale and affordability of the underground market for fake social media account verifications, raising fresh concerns about online manipulation and digital security. According to researchers, fake verification badges can be purchased for as little as eight cents, enabling the rapid creation of networks that imitate authentic users across major online platforms.

The Cambridge Online Trust and Safety Index (COTSI), launched on Thursday, is described as the first global tool capable of tracking real-time prices for fraudulent account verification. The index monitors more than 500 platforms, including TikTok, Instagram, Amazon, Spotify and Uber. By analysing data from sellers operating across the dark web and black-market channels, the project highlights how accessible and inexpensive these services have become.

Researchers say the low cost of creating fake accounts is contributing to the rise of “bot armies” — large groups of automated or semi-automated profiles designed to mimic genuine human activity. These networks can distort online conversations, amplify misleading content, and promote scams or commercial products. They can also be deployed to influence political messaging, creating an illusion of public support or opposition during major events such as elections or policy debates.

The team behind the index said the findings come at a sensitive time for governments and regulators working to contain misinformation. Many popular platforms have reduced investment in content monitoring during the past two years, while others have introduced programmes that reward users for generating high volumes of engagement. Researchers warn that such incentives may encourage the use of artificially inflated interactions, making fake accounts even more valuable to those seeking influence.


According to Cambridge analysts, the market for fraudulent verification has become highly sophisticated. Sellers offer tiered packages, guaranteeing features such as blue-badge symbols, verified rankings or the appearance of longstanding account history. Prices vary by platform and country, but the index shows that even the most complex packages remain within easy reach for groups attempting to manipulate public debate or carry out coordinated campaigns.

The launch of COTSI marks the first attempt to document these prices on a global scale. By presenting live data on the cost of creating fake identities, researchers hope to give policymakers, technology companies and security agencies a clearer picture of how digital manipulation is evolving. The study’s authors stress that tracking these markets is essential for understanding the risks posed by unauthenticated accounts, particularly during periods of political tension.

The university said the index will be updated regularly and will remain publicly accessible as part of its efforts to strengthen digital transparency worldwide.


Experts Question Impact of Australia’s New Social Media Ban for Children Under 16


Australia has introduced sweeping restrictions that prevent children under 16 from creating or maintaining accounts on major social media platforms, but experts warn the measures may not significantly change young people’s online behaviour. The restrictions, which took effect on December 10, apply to platforms including Facebook, Instagram, TikTok, Snapchat, YouTube, Twitch, Reddit and X.

Under the new rules, children cannot open accounts, yet they can still access most platforms without logging in—raising questions about how effective the regulations will be in shaping online habits. The eSafety Commissioner says the reforms are intended to shield children from online pressures, addictive design features and content that may harm their health and wellbeing.

Social media companies are required to block underage users through age-assurance tools that rely on facial-age estimation, ID uploads or parental consent. Ahead of the rollout, authorities tested 60 verification systems across 28,500 facial recognition assessments. The results showed that while many tools could distinguish children from adults, accuracy declined for users aged 16 and 17, for girls, and for non-Caucasian users, whose ages could be misestimated by two years or more. Experts say these limitations mean many teenagers may still find ways around the rules.

“How do they know who is 14 or 15 when the kids have all signed up as being 75?” asked Sonia Livingstone, a social psychology professor at the London School of Economics. She warned that misclassifications will be common as platforms attempt to enforce the regulations.

Meta acknowledged the challenge, saying complete accuracy is unlikely without requiring every user to present government ID—something the company argues would raise privacy and security concerns. Users over 16 who lose access by mistake are allowed to appeal.


Several platforms have criticised the ban, arguing that it removes teenagers from safer, controlled environments. Meta and Google representatives told Australian lawmakers that logged-in teenage accounts already come with protections that limit contact from unknown users, filter sensitive subjects and disable personalised advertising. Experts say these protections are not always effective, citing studies where new YouTube and TikTok accounts quickly received misogynistic or self-harm-related content.

Analysts expect many teenagers to shift to smaller or lesser-regulated platforms. Apps such as Lemon8, Coverstar and Tango have surged into Australia’s top downloads since the start of December. Messaging apps like WhatsApp, Telegram and Signal—exempt from the ban—have also seen a spike in downloads. Livingstone said teenagers will simply “find alternative spaces,” noting that previous bans in other countries pushed young users to new platforms within days.

Researchers caution that gaming platforms such as Discord and Roblox, also outside the scope of the ban, may become new gathering points for young Australians. Studies will be conducted to assess the long-term impact on mental health and whether the restrictions support or complicate parents’ efforts to regulate screen time.

Experts say it may take several years to determine whether the ban delivers meaningful improvements to children’s wellbeing.


OECD Warns of Sharp Rise in Cyberbullying Across Europe


Cyberbullying among adolescents has increased across every European country included in a new report by the Organisation for Economic Co-operation and Development (OECD), raising concerns among researchers, educators and child-protection advocates. The findings, part of the OECD’s How’s Life for Children in the Digital Age? report, show that online harassment is now affecting young people in all 29 countries and regions surveyed, with wide disparities between nations.

The data, which covers children aged 11, 13 and 15, reveals rates ranging from 7.5 per cent in Spain to 27.1 per cent in Lithuania. The European average stands at 15.5 per cent. Alongside Lithuania, the countries with the highest levels include Latvia, Poland, England, Hungary, Estonia, Ireland, Scotland, Slovenia, Sweden, Wales, Finland and Denmark. Nations such as Portugal, Greece, France, Germany and Italy recorded lower-than-average levels.

Cyberbullying in the study refers to repeated or intentional harassment online, including hostile messages, posts designed to ridicule, or the sharing of unflattering or inappropriate images without consent. The OECD noted that online abuse often involves a power imbalance and is amplified by the reach of digital platforms.

Experts attribute national differences to a combination of technological access, cultural norms and institutional preparedness. James O’Higgins Norman, UNESCO Chair on Bullying and Cyberbullying at Dublin City University, said variations in smartphone use, internet penetration and dominant social media platforms influence how often young people are exposed to harmful interactions. He added that cultural attitudes toward conflict and aggression, as well as the quality of school-based prevention programmes, shape each country’s experience.


Specialists from the European Antibullying Network pointed to digital literacy as a key factor. Countries that teach online safety as part of the school curriculum tend to see better outcomes. They also highlighted broader social and economic inequalities, noting that communities with fewer resources often struggle to support vulnerable children effectively.

The report shows that cyberbullying increased in every country surveyed between the 2017–18 and 2021–22 survey periods. Denmark, Lithuania, Norway, Slovenia, Iceland and the Netherlands recorded jumps of more than five percentage points, and the OECD average rose from 12.1 to 15.5 per cent. Researchers say the rise coincided with increased access to smartphones and longer daily screen time among adolescents.

Experts agree that the COVID-19 pandemic accelerated the trend. With schools closed and socialising taking place online, young people spent more time on platforms where conflicts could quickly escalate. Digital environments that offer anonymity and instant communication can weaken empathy and accountability, making hostile behaviour more likely, O’Higgins Norman said. He added that some countries are now reporting signs of stabilisation as in-person schooling has resumed.

Girls are more likely than boys to report being cyberbullied in most countries. Across the OECD sample, the rate is 16.4 per cent for girls and 14.3 per cent for boys. Researchers link this gap to the nature of online interactions, as girls tend to engage more in social-media communication, where relational forms of aggression — such as exclusion or image-based harassment — are more common.

Family structure also plays a significant role. Adolescents living in one-parent households report a cyberbullying rate of 19.8 per cent, compared with 14.1 per cent among those living with two parents. Experts say single parents often face heavier time and financial pressures, reducing their capacity to supervise online activity. Young people in such households may also spend more time online for social connection, increasing exposure to risk.


The OECD’s findings add to growing calls for more comprehensive national strategies, stronger digital-literacy education and support structures that reflect the realities of adolescent online life.
