Wikipedia Challenges UK Online Safety Regulations Over Volunteer Privacy Concerns

The Wikimedia Foundation, the non-profit organisation behind Wikipedia, is set to appear before London’s Royal Courts of Justice on July 22 to contest the potential classification of the online encyclopedia as a “Category 1” service under the UK’s Online Safety Act (OSA). The foundation argues that such a designation could severely impact the privacy, safety, and operations of its global community of volunteer contributors.

Under the OSA, Category 1 platforms—considered high-risk due to their scale and features—face extensive regulatory obligations, including user verification and stricter content moderation. The Wikimedia Foundation warns that enforcing such rules on Wikipedia would require it to identify thousands of its UK-based contributors, thereby compromising the anonymity that has been central to the platform’s functioning and editorial integrity.

In a statement released ahead of the hearing, the foundation said that complying with these rules could expose volunteers to risks such as data breaches, harassment, lawsuits, or even persecution in countries with repressive regimes. “This legal challenge is about protecting public interest projects online,” said Stephen LaPorte, General Counsel at the Wikimedia Foundation. “If the court rules in our favour, it could set a global precedent for safeguarding privacy and volunteer-led digital communities.”

The court case specifically targets a set of provisions known as the Categorisation Regulations, rather than the entirety of the Online Safety Act. These rules determine which services qualify as Category 1 and thus fall under the strictest oversight. Wikipedia’s massive traffic—estimated at over 11 billion global views monthly, including around 844 million from UK users—places it well within the threshold for designation.

Phil Bradley-Schmieg, Wikimedia’s lead counsel, acknowledged the importance of online safety regulation but emphasised that the current framework fails to distinguish between social media platforms and public interest projects like Wikipedia. “These regulations threaten to undermine Wikipedia’s open model by imposing burdensome verification and moderation requirements, which are incompatible with how our community operates,” he said.

The foundation also expressed concern over how the law could inadvertently hinder its algorithm-based tools—such as translation recommendations and the New Pages Feed—which are designed to improve content quality and moderation. Wikimedia contends that these features could be mistakenly interpreted as content recommendation systems under the OSA, making them subject to regulation despite their benign intent.

As the UK seeks to lead in regulating the digital landscape, the outcome of this case could have broader implications for how public interest websites are treated under new internet safety laws.

Meta Launches Muse Spark, Its First Major AI Model in Nine Months

Meta has unveiled its first major AI model in nine months, following a $14.3 billion (€12.24 billion) investment spree and an executive hiring push to rival OpenAI and Google. The American tech company introduced the model, called Muse Spark, on Wednesday, claiming it is faster and smarter than its predecessors.

The company, founded by Mark Zuckerberg, invested $14.3 billion in Scale AI in June 2025 and recruited its CEO and co-founder, Alexandr Wang, to oversee Meta Superintelligence Labs, which houses teams working on foundational AI models. Zuckerberg also embarked on a hiring campaign, bringing in executives from competitors including OpenAI, Anthropic, and Google.

In a blog post, Meta said, “Over the last nine months, Meta Superintelligence Labs rebuilt our AI stack from the ground up, moving faster than any development cycle we have run before. This initial model is small and fast by design, yet capable enough to reason through complex questions in science, math, and health. It is a powerful foundation, and the next generation is already in development.”

Muse Spark is positioned as a significant upgrade over Meta’s last major release, Llama 4, launched in April 2025. The company highlighted that the model excels in advanced reasoning, particularly in scientific, mathematical, and medical queries. To improve its health advice capabilities, Meta worked with over 1,000 physicians to curate training data, aiming for more accurate and comprehensive responses.

The AI model will power the company’s digital assistant in the Meta AI app and website, with planned integration across Facebook, Instagram, WhatsApp, Messenger, and the Ray-Ban Meta AI glasses. A “contemplating mode” will gradually roll out, allowing multiple AI agents to reason in parallel on complex tasks. Meta’s technical blog noted this feature is designed to compete with high-level reasoning in models such as Gemini Deep Think and GPT Pro.

Zuckerberg emphasized on social media that Meta aims to build AI products that “don’t just answer your questions but act as agents that do things for you.” Unlike conventional chatbots, these AI agents operate autonomously, gathering information based on user preferences to assist without direct human commands.

One notable shift for Meta is the move away from open-source AI models. Unlike earlier releases, Muse Spark is not available for public download, meaning access to the technology is currently restricted. The company said the model is initially available only in the United States.

Muse Spark underscores Meta’s aggressive push into the competitive AI market, combining extensive investment, executive recruitment, and technical innovation to challenge the dominance of established players like OpenAI and Google.

OpenAI Urges Governments to Rethink Economy as AI Growth Accelerates

OpenAI has called on governments to rethink the foundations of the economy, warning that artificial intelligence (AI) could soon surpass human intelligence and drastically change how people work, live, and pay taxes. The company outlined its initial policy ideas on Monday, aimed at mitigating the economic disruption caused by rapid AI adoption in the United States and worldwide.

One key proposal is the creation of a public wealth fund that would give citizens a direct stake in AI-driven economic growth. According to the policy document, the fund could invest in diversified, long-term assets, including AI companies and broader firms adopting AI technologies, with returns distributed to all citizens.

The company also suggested that governments encourage businesses to launch four-day workweek pilot programs without any reduction in pay, balancing the productivity gains delivered by AI with the well-being of workers. Lawmakers are also urged to modernize tax systems by shifting taxation away from labor income, which AI-related job losses could erode, and toward corporate income and capital gains. The report proposes additional measures, such as taxing companies that replace human labor with automation.

OpenAI recommends that social benefits, including retirement pensions and healthcare, be provided through portable accounts that follow individuals across different jobs, industries, and entrepreneurial ventures. This model would help ensure continuity of support in a labor market increasingly influenced by AI.

These recommendations echo broader discussions among AI leaders about the future of work. OpenAI CEO Sam Altman and xAI’s Elon Musk have previously highlighted universal basic income as a potential necessity as traditional employment declines. Other tech leaders, including Nvidia’s Jensen Huang and Zoom’s Eric Yuan, have advocated shorter workweeks to distribute productivity gains from AI more evenly.

Concerns about AI’s long-term impact extend beyond economics. In January, Anthropic CEO Dario Amodei warned that superintelligent AI, capable of outpacing human decision-making, poses “existential danger.” He suggested tighter controls on the export of key technologies, such as semiconductor chips used to train large language models, as one way to manage the risk. Amodei also called for transparency laws requiring AI companies to disclose how they guide their models’ behavior.

OpenAI’s policy document represents an early step in urging governments to address the structural changes AI may bring. The proposals highlight the need to rethink traditional concepts of work, taxation, and social support as the technology continues to advance rapidly.

As AI continues to reshape global economies, policymakers and industry leaders face increasing pressure to develop strategies that protect citizens while fostering innovation and sustainable growth.
