Campaign Highlights Growing Concern Over Declining Quality of Digital Platforms

A viral campaign led by the Norwegian Consumer Council has sparked global debate over what critics describe as the steady decline in the quality of popular digital platforms.

A widely shared video produced by the group features a self-described “professional enshitificator” adding intrusive pop-ups to websites, inserting extra advertisements into YouTube videos and triggering disruptive software updates. The video, which has drawn millions of views, is part of a broader effort to highlight the concept known as “enshittification.”

A platform becomes ‘enshittified’ when it introduces paid features or subscriptions that make a user’s experience worse than it used to be. The term was coined in 2023 by journalist Cory Doctorow, who argued that digital services often begin by prioritising users before gradually shifting toward profit-driven practices that degrade the experience.

According to the Norwegian Consumer Council, this trend is increasingly visible across major platforms. Over 70 advocacy groups from the United States and across Europe, including Norway, have written to policymakers in more than 14 countries, urging stronger action to protect consumers and curb what they describe as anti-competitive behaviour.

The group’s analysis points to platforms such as Facebook as examples of how services evolve. Originally designed to connect friends and family, the platform now prioritises advertising and promoted content, often interrupting user activity with sponsored posts and algorithm-driven material.

Experts say the problem is tied to how digital markets operate. Finn Lützow-Holm Myrstad, the council’s director of digital policy, said companies are able to introduce these changes because users have limited alternatives. “It’s a deliberate process,” he said, noting that once users are locked into a platform, switching becomes difficult.

Economists highlight the role of the “network effect,” where a platform becomes more valuable as more people use it. This makes users reluctant to leave, even if the service declines. At the same time, companies introduce switching costs, such as data loss or the effort required to rebuild connections elsewhere, further discouraging migration.

Industry analysts also point to reduced competition following major acquisitions, including Meta Platforms’ purchase of Instagram, as a factor that has allowed platforms to prioritise revenue over user experience.

Regulators in Europe have introduced measures aimed at addressing these concerns. The Digital Markets Act seeks to open up dominant platforms to competition, while the Digital Services Act requires companies to assess risks and improve transparency. However, experts warn that enforcement has been slow and penalties insufficient to deter harmful practices.

Advocates are now calling for stronger rules, including proposed legislation such as the Digital Fairness Act, to address deceptive design and addictive features.

While digital platforms remain central to communication, commerce and entertainment, the campaign underscores growing frustration among users and calls for a shift toward services that prioritise transparency, competition and consumer rights.

Hackers Gain Access to Anthropic’s Restricted AI Model “Mythos”

A group of unauthorised users has reportedly gained access to a highly restricted artificial intelligence system developed by Anthropic, raising fresh concerns about the security of advanced AI technologies.

The system, known as Mythos, has been described by the company as too sensitive for public release due to what it calls “unprecedented cybersecurity risks.” Designed primarily for enterprise-level security applications, the model is currently being tested by a limited number of technology firms and financial institutions.

According to reports, access to Mythos was obtained through a third-party vendor connected to Anthropic. Members of a private online forum are believed to have exploited this route, allowing users to interact with the system despite strict access controls. Sources cited in the report said the group attempted multiple strategies before successfully gaining entry and has continued to use the model since the breach.

Anthropic acknowledged it is investigating the claims but said there is no evidence so far that its internal systems have been directly compromised. A company spokesperson indicated that the situation remains under review as more details are gathered.

Mythos is part of Anthropic’s broader initiative, known as Project Glasswing, which aims to develop advanced AI tools capable of identifying and addressing cybersecurity vulnerabilities. Due to the model’s capabilities, access has been limited to a select group of partners, including major technology companies and financial institutions.

Reports indicate that firms such as Amazon, Apple and JPMorgan Chase are among those involved in testing the system. Other banking giants, including Goldman Sachs, Citigroup, Bank of America and Morgan Stanley, are also said to be evaluating its potential use in detecting weaknesses in digital infrastructure.

The issue has drawn attention at high levels of government and industry. Earlier this month, US Treasury Secretary Scott Bessent reportedly convened a meeting with senior banking executives in Washington to discuss the implications of advanced AI systems like Mythos. Participants were encouraged to explore how such tools could strengthen cybersecurity frameworks, particularly in the financial sector.

The reported breach highlights the growing challenge of securing cutting-edge AI systems as they become more powerful and more widely deployed. Experts have warned that even limited leaks or unauthorised access could expose sensitive capabilities, potentially allowing malicious actors to exploit vulnerabilities or replicate advanced techniques.

Anthropic has not confirmed the extent of the access gained or whether any sensitive outputs were extracted. The company had also not responded to additional requests for comment at the time of publication.

As investigations continue, the incident is likely to intensify scrutiny over how AI developers safeguard their most advanced systems, especially those designed to operate in high-risk environments such as cybersecurity and finance.

Palantir Manifesto Sparks Backlash Over AI Weapons and Cultural Claims

A controversial online post by Palantir Technologies has triggered widespread criticism after the firm outlined views on artificial intelligence, national service, and global cultural differences, prompting concern from politicians and analysts.

The post, shared on X over the weekend, has been described as a 22-point manifesto summarising ideas from the book The Technological Republic, written by company chief executive Alex Karp and head of corporate affairs Nicholas Zamiska. While framed by the company as a brief overview, its content has drawn sharp reactions for its tone and proposals.

Among the most contentious statements was a claim that some cultures have contributed major advancements while others remain “dysfunctional and regressive.” The post also called for renewed emphasis on national service and suggested that technology firms have a moral responsibility to support defence initiatives.

Critics were quick to respond. Yanis Varoufakis warned that the message pointed toward a future shaped by “AI-powered killer robots,” highlighting concerns over the growing role of autonomous weapons. In the United Kingdom, Victoria Collins described the manifesto as resembling “the ramblings of a supervillain,” questioning whether companies with such views should be involved in public sector work.

The document also suggested rethinking post-war geopolitical arrangements, including what it described as restrictions placed on countries such as Germany and Japan after World War II. It further encouraged a greater role for religion in public life, adding to the debate around the company’s broader ideological stance.

Industry observers note that Palantir Technologies is not an ordinary tech firm. Founded in 2003 by Alex Karp and billionaire investor Peter Thiel, the company provides data analytics software to governments, military agencies, and law enforcement bodies worldwide. Its contracts include work with the US military and the UK’s National Health Service, placing it at the intersection of technology, security, and public policy.

Eliot Higgins, head of the investigative platform Bellingcat, said the manifesto should be viewed in the context of the company’s business model. He argued that the ideas outlined are not abstract philosophy but reflect the outlook of a firm whose revenue is tied to defence, intelligence, and policing.

The debate comes at a time when artificial intelligence is rapidly reshaping industries and raising ethical questions about its use in warfare and governance. Palantir’s post suggests that the development of AI-driven weapons is inevitable, framing the issue as a matter of who controls the technology rather than whether it should exist.

The backlash highlights growing unease over the influence of private technology companies in shaping policies that extend beyond commercial innovation into global security and societal values.

Siemens and Nvidia Test Humanoid Robot on Factory Floor in Push for AI-Driven Production

German engineering group Siemens and US chipmaker Nvidia have carried out a live factory trial of a humanoid robot, marking a step toward integrating artificial intelligence into industrial production.

The test took place at Siemens’ electronics plant in Erlangen, where a robot developed by UK-based Humanoid was deployed to perform logistics tasks alongside human workers. The machine, known as HMND 01, was designed to handle routine operations such as picking up, transporting and placing containers used in daily factory processes.

According to Siemens, the robot operated autonomously for more than eight hours during the trial and successfully completed over 90 per cent of its assigned tasks. It handled around 60 containers per hour, demonstrating the potential for consistent performance in a real industrial environment.

The project forms part of a broader collaboration between Siemens and Nvidia aimed at developing what they describe as the world’s first fully AI-driven adaptive factories. The goal is to create production environments where machines can work alongside people, responding to changes and making decisions in real time.

Executives involved in the project said the trial highlights advances in “physical AI”, a concept that enables machines to perceive their surroundings, process information and adjust their actions without direct human control. Nvidia provided the underlying artificial intelligence technology, including simulation tools and real-time processing systems, while Siemens handled industrial integration.

The companies said much of the robot’s development was completed through virtual simulations before deployment. This approach significantly reduced the time required for testing and design, cutting development cycles from as long as two years to roughly seven months.

Industry observers say such systems could help manufacturers address labour shortages and improve efficiency in areas where traditional automation has struggled. Tasks that require flexibility, movement and interaction with human workers have historically been difficult for machines to handle, but advances in AI are beginning to close that gap.

While the trial is being described as a milestone, Siemens and Nvidia have not provided a timeline for large-scale adoption of humanoid robots in factories. Questions remain around cost, scalability and safety before the technology can be rolled out more widely.

Even so, the demonstration offers a glimpse into how manufacturing could evolve, with intelligent machines taking on more complex roles while working in coordination with human staff on the factory floor.
