
UN Launches Global Effort to Govern Artificial Intelligence Amid Growing Concerns


Artificial intelligence (AI) dominated discussions at the United Nations this week as world leaders convened in New York to debate both its potential benefits and its risks, while the UN announced new bodies designed to shape international AI governance.

Addressing the Security Council on Wednesday, UN Secretary-General António Guterres said the challenge was no longer whether AI would affect global security, but how nations could manage its influence responsibly.

“AI can strengthen prevention and protection, anticipating food insecurity and displacement, supporting de-mining, helping identify potential outbreaks of violence, and so much more,” Guterres said. “But without guardrails, it can also be weaponised.”

The Council’s debate focused on preventing the misuse of AI in military and security operations, especially its potential to fuel misinformation and escalate conflicts. European leaders urged the UN to take a proactive role, warning that the technology should never be deployed without human oversight.

Greek Prime Minister Kyriakos Mitsotakis likened the moment to past global challenges. “Just as the Council once rose to meet the challenges of nuclear weapons or peacekeeping, so too now it must rise to govern the age of AI,” he said.

British Deputy Prime Minister David Lammy highlighted AI’s promise for peacebuilding, noting its capacity for “ultra-accurate, real-time logistics” and “ultra-early warning systems” to help prevent crises before they spiral.

New UN Governance Structure

In a significant step, the UN General Assembly announced last month the creation of two new entities to guide global AI regulation: an independent scientific panel and a global dialogue forum.

The Scientific Panel, composed of 40 experts selected through international nominations, will publish annual reports. These will feed into the Global Dialogue on AI Governance, scheduled for Geneva in 2026 and New York in 2027. The UN has described the initiative as the most inclusive global governance framework yet proposed for AI.


“This is by far the world’s most globally inclusive approach to governing AI,” wrote Isabella Wilkinson, a research fellow at Chatham House. She called the move “a symbolic triumph,” though she questioned whether the UN’s slow-moving bureaucracy could keep pace with a technology evolving at breakneck speed.

The UN chief will formally launch the new bodies on Thursday, marking the first occasion when all 193 member states will collectively shape the global AI governance agenda.

A Call for Binding Rules

While Britain, France, and South Korea have hosted international AI summits, none have yielded binding agreements. By contrast, many experts and political leaders have urged the UN to take the lead on a global treaty.

Earlier this year, Nobel Prize winners and senior executives from OpenAI, Google DeepMind, and Anthropic joined European lawmakers in calling for “minimum guardrails” to prevent the most dangerous uses of AI. Signatories included former Irish president Mary Robinson and former Italian prime minister Enrico Letta.

Whether the UN can turn this momentum into enforceable regulation remains uncertain. For now, however, the organisation’s new framework signals a growing consensus that AI governance must be addressed at the highest international level.


Cambridge Index Reveals Global Black Market for Fake Social Media Verifications


A new index developed by the University of Cambridge has revealed the scale and affordability of the underground market for fake social media account verifications, raising fresh concerns about online manipulation and digital security. According to researchers, fake verification badges can be purchased for as little as eight cents, enabling the rapid creation of networks that imitate authentic users across major online platforms.

The Cambridge Online Trust and Safety Index (COTSI), launched on Thursday, is described as the first global tool capable of tracking, in real time, the prices charged to fraudulently verify accounts. The index monitors more than 500 platforms, including TikTok, Instagram, Amazon, Spotify and Uber. By analysing data from sellers operating across the dark web and black-market channels, the project highlights how accessible and inexpensive these services have become.

Researchers say the low cost of creating fake accounts is contributing to the rise of “bot armies” — large groups of automated or semi-automated profiles designed to mimic genuine human activity. These networks can distort online conversations, amplify misleading content, and promote scams or commercial products. They can also be deployed to influence political messaging, creating an illusion of public support or opposition during major events such as elections or policy debates.

The team behind the index said the findings come at a sensitive time for governments and regulators working to contain misinformation. Many popular platforms have reduced investment in content monitoring during the past two years, while others have introduced programmes that reward users for generating high volumes of engagement. Researchers warn that such incentives may encourage the use of artificially inflated interactions, making fake accounts even more valuable to those seeking influence.


According to Cambridge analysts, the market for fraudulent verification has become highly sophisticated. Sellers offer tiered packages, guaranteeing features such as blue-badge symbols, verified rankings or the appearance of longstanding account history. Prices vary by platform and country, but the index shows that even the most complex packages remain within easy reach for groups attempting to manipulate public debate or carry out coordinated campaigns.

The launch of COTSI marks the first attempt to document these prices on a global scale. By presenting live data on the cost of creating fake identities, researchers hope to give policymakers, technology companies and security agencies a clearer picture of how digital manipulation is evolving. The study’s authors stress that tracking these markets is essential for understanding the risks posed by unauthenticated accounts, particularly during periods of political tension.

The university said the index will be updated regularly and will remain publicly accessible as part of its efforts to strengthen digital transparency worldwide.


Experts Question Impact of Australia’s New Social Media Ban for Children Under 16


Australia has introduced sweeping restrictions that prevent children under 16 from creating or maintaining accounts on major social media platforms, but experts warn the measures may not significantly change young people’s online behaviour. The restrictions, which took effect on December 10, apply to platforms including Facebook, Instagram, TikTok, Snapchat, YouTube, Twitch, Reddit and X.

Under the new rules, children cannot open accounts, yet they can still access most platforms without logging in—raising questions about how effective the regulations will be in shaping online habits. The eSafety Commissioner says the reforms are intended to shield children from online pressures, addictive design features and content that may harm their health and wellbeing.

Social media companies are required to block underage users through age-assurance tools that rely on facial-age estimation, ID uploads or parental consent. Ahead of the rollout, authorities tested 60 verification systems across 28,500 facial recognition assessments. The results showed that while many tools could distinguish children from adults, accuracy declined for users aged 16 and 17, for girls and for non-Caucasian users, with estimates sometimes off by two years or more. Experts say these limitations mean many teenagers may still find ways around the rules.

“How do they know who is 14 or 15 when the kids have all signed up as being 75?” asked Sonia Livingstone, a social psychology professor at the London School of Economics. She warned that misclassifications will be common as platforms attempt to enforce the regulations.

Meta acknowledged the challenge, saying complete accuracy is unlikely without requiring every user to present government ID—something the company argues would raise privacy and security concerns. Users over 16 who are mistakenly blocked can appeal.


Several platforms have criticised the ban, arguing that it removes teenagers from safer, controlled environments. Meta and Google representatives told Australian lawmakers that logged-in teenage accounts already come with protections that limit contact from unknown users, filter sensitive subjects and disable personalised advertising. Experts say these protections are not always effective, citing studies where new YouTube and TikTok accounts quickly received misogynistic or self-harm-related content.

Analysts expect many teenagers to shift to smaller or lesser-regulated platforms. Apps such as Lemon8, Coverstar and Tango have surged into Australia’s top downloads since the start of December. Messaging apps like WhatsApp, Telegram and Signal—exempt from the ban—have also seen a spike in downloads. Livingstone said teenagers will simply “find alternative spaces,” noting that previous bans in other countries pushed young users to new platforms within days.

Researchers caution that gaming platforms such as Discord and Roblox, also outside the scope of the ban, may become new gathering points for young Australians. Studies will be conducted to assess the long-term impact on mental health and whether the restrictions support or complicate parents’ efforts to regulate screen time.

Experts say it may take several years to determine whether the ban delivers meaningful improvements to children’s wellbeing.


OECD Warns of Sharp Rise in Cyberbullying Across Europe


Cyberbullying among adolescents has increased across every European country included in a new report by the Organisation for Economic Co-operation and Development (OECD), raising concerns among researchers, educators and child-protection advocates. The findings, part of the OECD’s How’s Life for Children in the Digital Age? report, show that online harassment is now affecting young people in all 29 countries and regions surveyed, with wide disparities between nations.

The data, which covers children aged 11, 13 and 15, reveals rates ranging from 7.5 per cent in Spain to 27.1 per cent in Lithuania. The European average stands at 15.5 per cent. Alongside Lithuania, the countries with the highest levels include Latvia, Poland, England, Hungary, Estonia, Ireland, Scotland, Slovenia, Sweden, Wales, Finland and Denmark. Nations such as Portugal, Greece, France, Germany and Italy recorded lower-than-average levels.

Cyberbullying in the study refers to repeated or intentional harassment online, including hostile messages, posts designed to ridicule, or the sharing of unflattering or inappropriate images without consent. The OECD noted that online abuse often involves a power imbalance and is amplified by the reach of digital platforms.

Experts attribute national differences to a combination of technological access, cultural norms and institutional preparedness. James O’Higgins Norman, UNESCO Chair on Bullying and Cyberbullying at Dublin City University, said variations in smartphone use, internet penetration and dominant social media platforms influence how often young people are exposed to harmful interactions. He added that cultural attitudes toward conflict and aggression, as well as the quality of school-based prevention programmes, shape each country’s experience.


Specialists from the European Antibullying Network pointed to digital literacy as a key factor. Countries that teach online safety as part of the school curriculum tend to see better outcomes. They also highlighted broader social and economic inequalities, noting that communities with fewer resources often struggle to support vulnerable children effectively.

The report shows that cyberbullying increased everywhere between the 2017–18 and 2021–22 survey periods. Denmark, Lithuania, Norway, Slovenia, Iceland and the Netherlands recorded jumps of more than five percentage points. The OECD average rose from 12.1 to 15.5 per cent. Researchers say the rise coincided with increased access to smartphones and longer daily screen time among adolescents.

Experts agree that the COVID-19 pandemic accelerated the trend. With schools closed and socialising taking place online, young people spent more time on platforms where conflicts could quickly escalate. Digital environments that offer anonymity and instant communication can weaken empathy and accountability, making hostile behaviour more likely, O’Higgins Norman said. He added that some countries are now reporting signs of stabilisation as in-person schooling has resumed.

Girls are more likely than boys to report being cyberbullied in most countries. Across the OECD sample, the rate is 16.4 per cent for girls and 14.3 per cent for boys. Researchers link this gap to the nature of online interactions, as girls tend to engage more in social-media communication, where relational forms of aggression — such as exclusion or image-based harassment — are more common.

Family structure also plays a significant role. Adolescents living in one-parent households report a cyberbullying rate of 19.8 per cent, compared with 14.1 per cent among those living with two parents. Experts say single parents often face heavier time and financial pressures, reducing their capacity to supervise online activity. Young people in such households may also spend more time online for social connection, increasing exposure to risk.


The OECD’s findings add to growing calls for more comprehensive national strategies, stronger digital-literacy education and support structures that reflect the realities of adolescent online life.
