Tech
Nearly Half of Europeans Support Banning Social Media Platform X Over EU Rule Breaches
A new survey across Germany, France, Spain, Italy, and Poland shows that nearly half of Europeans would support banning social media platform X from the European Union if it continues to break EU rules. Conducted by YouGov, the polling highlights rising frustration among EU citizens over what they perceive as the platform’s failure to comply with European digital regulations.
The survey found that between 60 and 78 percent of respondents in each country believe the EU should take stronger action against X if it does not address breaches identified by the European Commission last year. Of those in favour of further measures, a majority—ranging from 62 to 73 percent—said the platform should be banned if it refuses to comply. Overall, 47 percent of respondents backed a potential ban.
The European Commission fined X €120 million in December under the Digital Services Act (DSA) for failing to meet transparency obligations. Central to the investigation is the blue checkmark, which was previously issued free of charge to verify official accounts but is now sold as a subscription for €7 a month, a change the Commission says could mislead users about account authenticity. The Commission also found the platform did not meet transparency requirements for advertising, raising concerns that users could be exposed to financial scams. X has 90 working days to respond to the Commission’s findings.
Since the fine, the platform and its built-in AI assistant, Grok, have faced additional scrutiny. Critics argue that X amplifies harmful content, including deepfake pornography and child sexual abuse material. French prosecutors recently raided X’s Paris office as part of an ongoing investigation into child abuse content.
The YouGov survey indicates strong public support for tougher enforcement against large tech platforms. Seventy percent of respondents said they would back further action if X fails to comply with the Commission’s ruling. Among these, 17 to 28 percent favoured further fines, 23 to 29 percent supported banning the platform outright, and the largest group, 40 to 52 percent, wanted a combination of fines and a ban.
Ava Lee, executive director of People vs Big Tech, said the data shows Europeans are “done with empty warnings.” She added that the case against X could set a precedent for how the EU enforces its rules on major technology companies.
Despite public support for tougher measures, banning a major social media platform would be considered an extreme step under EU law. The Commission has not indicated that it is currently considering such a move.
The survey comes amid wider debates in Europe over social media regulation. Several countries, including Spain, France, Italy, Germany, and the United Kingdom, are considering restrictions or outright bans on social media for minors, citing concerns over illegal or harmful content. Australia has already implemented strict rules for users under 16, but experts caution that enforcement challenges mean it is too early to judge the effectiveness of such bans.
Professor Kathryn Modecki from the University of Western Australia noted that many children continue to access banned apps through simple workarounds, suggesting policymakers should monitor results carefully before expanding similar restrictions elsewhere.
Tech
Researchers Warn AI Systems Can Now Replicate and Spread Across Computers
A new study by US-based cybersecurity group Palisade Research has raised concerns about the growing capabilities of advanced artificial intelligence systems after researchers demonstrated that some AI models were able to copy themselves, spread to other computers, and continue launching attacks without human assistance.
The experiment, described by researchers as the first known example of autonomous AI self-replication, tested several leading AI systems from OpenAI, Anthropic and Alibaba.
According to the research team, the tests were conducted in a controlled environment using computers intentionally fitted with security weaknesses. The AI models were connected to custom-built software that allowed them to interact directly with computer systems and execute commands beyond the capabilities of ordinary chatbot interfaces.
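The setup the researchers describe is, in effect, an agent “harness”: a small program that relays the model’s proposed shell commands to the operating system and feeds the output back to the model. The sketch below is a hypothetical, simplified illustration of that general pattern only; it is not Palisade’s actual tooling, and the function names, the DONE stop signal, and the placeholder model call are assumptions made for the example.

```python
# Minimal sketch of a command-execution harness for a language model.
# query_model is a stand-in for a real LLM API call; everything here is
# illustrative, not the software used in the Palisade study.

import subprocess


def query_model(transcript: str) -> str:
    """Placeholder for an LLM call.

    A real harness would send the transcript (the task description plus the
    output of earlier commands) to a model API and return the next shell
    command the model proposes. This stub simply signals that it is done.
    """
    return "DONE"


def run_agent(task: str, max_steps: int = 10) -> None:
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        command = query_model(transcript).strip()
        if command == "DONE":  # the model signals it has finished
            break
        # Execute the proposed command on the host and capture its output.
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=60
        )
        # Feed the output back so the model can decide on its next step.
        transcript += f"$ {command}\n{result.stdout}{result.stderr}\n"


if __name__ == "__main__":
    run_agent("List the files in the current directory.")
```

In a real harness the query_model placeholder would be a call to a model API; the feedback loop is what turns a chat model into a system that can act on a machine rather than merely answer questions, which is the capability the study set out to probe.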
Researchers instructed some systems to copy themselves directly, while others were asked to break into vulnerable computers and install another AI model. Because OpenAI and Anthropic do not publicly release the weights of their commercial models, the tests involving those companies’ systems used Alibaba’s openly available Qwen models as the software being copied.
The AI systems were required to locate security flaws, gain access to computers, steal login credentials, move necessary files, and create functioning copies of themselves on new machines. Once installed, the copied AI could continue attacking additional systems independently.
One of the most striking results involved Alibaba’s Qwen3.6-27B model, which successfully spread itself across four computers located in Canada, the United States, Finland and India after receiving only a single prompt. Researchers said the process took just under three hours and required no further human involvement.
The experiment was halted after three stages, though the final copy remained operational and theoretically capable of continuing the process.
Success rates varied between models. Researchers reported that Anthropic’s Claude Opus 4.6 completed the tasks in 81 percent of attempts, while OpenAI’s GPT-5.4 succeeded in roughly one-third of tests. Different versions of Alibaba’s Qwen models achieved success rates ranging from 19 percent to 33 percent.
The findings arrive amid wider debate over the risks posed by increasingly capable AI systems. Last month, Anthropic announced that it would not publicly release a version of its Claude Mythos Preview model, describing it as too dangerous because of its potential use in sophisticated cyberattacks.
Security experts have long warned that self-replicating systems could become difficult to contain if deployed maliciously. Traditional computer viruses can already copy themselves, but researchers said this experiment demonstrated AI systems making independent decisions to exploit vulnerabilities and continue spreading.
Despite the results, the researchers stressed that the study took place under tightly controlled conditions with deliberately weakened security systems. They noted that real-world networks often include monitoring tools and protections designed to block such attacks.
Still, the team said the experiment showed that autonomous AI self-replication can no longer be viewed as a theoretical possibility, but as a capability that now exists in practice.
