Tech

Meta Funds UK Government AI Fellowship with $1 Million Grant to Build Public Sector Tools

Tech giant Meta is backing the UK government’s latest push into artificial intelligence with a $1 million (€854,000) grant to support the development of new AI technologies for public sector use. The funding will launch the “Open-Source AI Fellowship,” a one-year initiative aimed at equipping government departments with advanced AI tools to streamline operations and bolster national security.

Announced by Technology Secretary Peter Kyle, the fellowship will support 10 engineers who will be embedded within the UK government beginning January 2026. Their mission: to develop open-source AI solutions for high-security and high-impact use cases across the public sector.

“This Fellowship is the best of AI in action – open, practical, and built for public good,” Kyle said. “It’s about delivery, not just ideas – creating real tools that help government work better for people.”

Among the potential applications of the fellowship are tools to speed up housing approvals using construction data, improve language translation for national security, and automate document summaries for civil servants. Fellows may also contribute to “Humphrey,” an AI-powered suite currently under development to assist public officials in drafting responses, summarising reports, and managing workload efficiently.

The initiative will be managed by the Alan Turing Institute, the UK’s national centre for data science and AI. Meta’s grant will directly support the fellowship through the Institute, which will place the selected engineers in appropriate departments to co-develop AI tools using Meta’s Llama 3.5 model and other open-source technologies.

All tools developed through the programme will be publicly accessible, reinforcing the government’s commitment to transparency and collaborative innovation.

The fellowship builds on ongoing AI pilots in government, including “Caddy,” an open-source AI assistant already in use at Citizens Advice centres. Caddy helps staff answer frequently asked questions on topics such as debt management, legal aid, and consumer rights.

The announcement follows another major tech partnership unveiled this week. The UK government signed an agreement with Google Cloud to train 100,000 civil servants in AI and digital skills by 2030. The programme aims to ensure that at least one in every 10 government officials is a tech specialist.

Together, the fellowship and upskilling initiatives reflect a broader strategy by the UK government to position itself as a leader in AI innovation and digital governance, while enhancing efficiency and responsiveness in the public sector.

Cambridge Index Reveals Global Black Market for Fake Social Media Verifications

A new index developed by the University of Cambridge has revealed the scale and affordability of the underground market for fake social media account verifications, raising fresh concerns about online manipulation and digital security. According to researchers, fake verification badges can be purchased for as little as eight cents, enabling the rapid creation of networks that imitate authentic users across major online platforms.

The Cambridge Online Trust and Safety Index (COTSI), launched on Thursday, is described as the first global tool capable of tracking real-time prices for fraudulent account verification. The index monitors more than 500 platforms, including TikTok, Instagram, Amazon, Spotify and Uber. By analysing data from sellers operating across the dark web and black-market channels, the project highlights how accessible and inexpensive these services have become.

Researchers say the low cost of creating fake accounts is contributing to the rise of “bot armies” — large groups of automated or semi-automated profiles designed to mimic genuine human activity. These networks can distort online conversations, amplify misleading content, and promote scams or commercial products. They can also be deployed to influence political messaging, creating an illusion of public support or opposition during major events such as elections or policy debates.

The team behind the index said the findings come at a sensitive time for governments and regulators working to contain misinformation. Many popular platforms have reduced investment in content monitoring during the past two years, while others have introduced programmes that reward users for generating high volumes of engagement. Researchers warn that such incentives may encourage the use of artificially inflated interactions, making fake accounts even more valuable to those seeking influence.

According to Cambridge analysts, the market for fraudulent verification has become highly sophisticated. Sellers offer tiered packages, guaranteeing features such as blue-badge symbols, verified rankings or the appearance of longstanding account history. Prices vary by platform and country, but the index shows that even the most complex packages remain within easy reach for groups attempting to manipulate public debate or carry out coordinated campaigns.

The launch of COTSI marks the first attempt to document these prices on a global scale. By presenting live data on the cost of creating fake identities, researchers hope to give policymakers, technology companies and security agencies a clearer picture of how digital manipulation is evolving. The study’s authors stress that tracking these markets is essential for understanding the risks posed by unauthenticated accounts, particularly during periods of political tension.

The university said the index will be updated regularly and will remain publicly accessible as part of its efforts to strengthen digital transparency worldwide.

Experts Question Impact of Australia’s New Social Media Ban for Children Under 16

Australia has introduced sweeping restrictions that prevent children under 16 from creating or maintaining accounts on major social media platforms, but experts warn the measures may not significantly change young people’s online behaviour. The restrictions, which took effect on December 10, apply to platforms including Facebook, Instagram, TikTok, Snapchat, YouTube, Twitch, Reddit and X.

Under the new rules, children cannot open accounts, yet they can still access most platforms without logging in—raising questions about how effective the regulations will be in shaping online habits. The eSafety Commissioner says the reforms are intended to shield children from online pressures, addictive design features and content that may harm their health and wellbeing.

Social media companies are required to block underage users through age-assurance tools that rely on facial-age estimation, ID uploads or parental consent. Ahead of the rollout, authorities tested 60 verification systems across 28,500 facial recognition assessments. The results showed that while many tools could distinguish children from adults, accuracy declined for users aged 16 and 17, for girls, and for non-Caucasian users, with estimates off by two years or more. Experts say these limitations mean many teenagers may still find ways around the rules.

“How do they know who is 14 or 15 when the kids have all signed up as being 75?” asked Sonia Livingstone, a social psychology professor at the London School of Economics. She warned that misclassifications will be common as platforms attempt to enforce the regulations.

Meta acknowledged the challenge, saying complete accuracy is unlikely without requiring every user to present government ID—something the company argues would raise privacy and security concerns. Users over 16 who lose access by mistake are allowed to appeal.

Several platforms have criticised the ban, arguing that it removes teenagers from safer, controlled environments. Meta and Google representatives told Australian lawmakers that logged-in teenage accounts already come with protections that limit contact from unknown users, filter sensitive subjects and disable personalised advertising. Experts say these protections are not always effective, citing studies where new YouTube and TikTok accounts quickly received misogynistic or self-harm-related content.

Analysts expect many teenagers to shift to smaller or lesser-regulated platforms. Apps such as Lemon8, Coverstar and Tango have surged into Australia’s top downloads since the start of December. Messaging apps like WhatsApp, Telegram and Signal—exempt from the ban—have also seen a spike in downloads. Livingstone said teenagers will simply “find alternative spaces,” noting that previous bans in other countries pushed young users to new platforms within days.

Researchers caution that gaming platforms such as Discord and Roblox, also outside the scope of the ban, may become new gathering points for young Australians. Studies will be conducted to assess the long-term impact on mental health and whether the restrictions support or complicate parents’ efforts to regulate screen time.

Experts say it may take several years to determine whether the ban delivers meaningful improvements to children’s wellbeing.

OECD Warns of Sharp Rise in Cyberbullying Across Europe

Cyberbullying among adolescents has increased across every European country included in a new report by the Organisation for Economic Co-operation and Development (OECD), raising concerns among researchers, educators and child-protection advocates. The findings, part of the OECD’s How’s Life for Children in the Digital Age? report, show that online harassment is now affecting young people in all 29 countries and regions surveyed, with wide disparities between nations.

The data, which covers children aged 11, 13 and 15, reveals rates ranging from 7.5 per cent in Spain to 27.1 per cent in Lithuania. The European average stands at 15.5 per cent. Alongside Lithuania, the countries with the highest levels include Latvia, Poland, England, Hungary, Estonia, Ireland, Scotland, Slovenia, Sweden, Wales, Finland and Denmark. Nations such as Portugal, Greece, France, Germany and Italy recorded lower-than-average levels.

Cyberbullying in the study refers to repeated or intentional harassment online, including hostile messages, posts designed to ridicule, or the sharing of unflattering or inappropriate images without consent. The OECD noted that online abuse often involves a power imbalance and is amplified by the reach of digital platforms.

Experts attribute national differences to a combination of technological access, cultural norms and institutional preparedness. James O’Higgins Norman, UNESCO Chair on Bullying and Cyberbullying at Dublin City University, said variations in smartphone use, internet penetration and dominant social media platforms influence how often young people are exposed to harmful interactions. He added that cultural attitudes toward conflict and aggression, as well as the quality of school-based prevention programmes, shape each country’s experience.

Specialists from the European Antibullying Network pointed to digital literacy as a key factor. Countries that teach online safety as part of the school curriculum tend to see better outcomes. They also highlighted broader social and economic inequalities, noting that communities with fewer resources often struggle to support vulnerable children effectively.

The report shows that cyberbullying increased everywhere between the 2017–18 and 2021–22 survey periods. Denmark, Lithuania, Norway, Slovenia, Iceland and the Netherlands recorded jumps of more than five percentage points. The OECD average rose from 12.1 to 15.5 per cent. Researchers say the rise coincided with increased access to smartphones and longer daily screen time among adolescents.

Experts agree that the COVID-19 pandemic accelerated the trend. With schools closed and socialising taking place online, young people spent more time on platforms where conflicts could quickly escalate. Digital environments that offer anonymity and instant communication can weaken empathy and accountability, making hostile behaviour more likely, O’Higgins Norman said. He added that some countries are now reporting signs of stabilisation as in-person schooling has resumed.

Girls are more likely than boys to report being cyberbullied in most countries. Across the OECD sample, the rate is 16.4 per cent for girls and 14.3 per cent for boys. Researchers link this gap to the nature of online interactions, as girls tend to engage more in social-media communication, where relational forms of aggression — such as exclusion or image-based harassment — are more common.

Family structure also plays a significant role. Adolescents living in one-parent households report a cyberbullying rate of 19.8 per cent, compared with 14.1 per cent among those living with two parents. Experts say single parents often face heavier time and financial pressures, reducing their capacity to supervise online activity. Young people in such households may also spend more time online for social connection, increasing exposure to risk.

The OECD’s findings add to growing calls for more comprehensive national strategies, stronger digital-literacy education and support structures that reflect the realities of adolescent online life.
