Tech
European executives warn AI growth is outpacing infrastructure, Nokia survey finds
More than 1,000 business and technology leaders across Europe have raised serious concerns about the continent’s readiness to support the rapid expansion of artificial intelligence, according to a new study by Nokia. Executives identified energy supply, network capacity, and secure connectivity as the most pressing challenges that could slow the adoption of AI across industries.
The survey found that AI is already widely used by European companies, with 67% reporting that they have integrated the technology into their operations. Another 15% are running pilot projects, suggesting adoption will continue to grow in the coming years. Many businesses see AI as essential for improving efficiency, automating processes, and strengthening innovation.
Cybersecurity emerged as the leading application area, with 63% of companies using AI to protect systems and data. Automation of business processes followed at 57%, while customer service tools such as chatbots and virtual assistants accounted for 55%. Companies are also using AI for product development, predictive analytics, robotics, and supply chain management.
Despite strong adoption, executives warned that infrastructure is struggling to keep pace with demand. Nokia’s report, titled “AI is too big for the European internet,” highlighted that Europe’s digital backbone is not yet equipped to handle large-scale AI workloads. The report noted that connectivity remains fragmented and security concerns persist, creating obstacles to expansion.
Energy supply was identified as the biggest constraint. About 87% of executives said they were worried that Europe’s energy infrastructure cannot meet rising AI demand. More than half said energy systems are already under strain or at risk. One in five companies reported delays to AI projects due to energy shortages, while others said they had to adjust project timelines or choose different locations because of limited power availability.
High electricity costs were also cited as a major concern, with 52% of executives saying Europe’s energy prices are not competitive compared to other regions. Limited grid capacity, slow approval processes, and restricted access to renewable energy sources were also highlighted as barriers.
As a result, 61% of executives said they are considering relocating data-intensive operations to regions with lower energy costs or have already taken steps in that direction. Only 16% said they plan to keep operations in Europe regardless of energy constraints.
Connectivity issues are also affecting companies. More than half reported network performance problems, including delays and downtime linked to increasing data traffic. Around 86% of executives expressed concern about internet reliability as AI usage continues to expand.
The report warned that global data traffic is expected to increase sharply by 2033, placing additional strain on existing networks. Business leaders called for greater investment in energy infrastructure, improved network capacity, and clearer regulations to support Europe’s ability to compete in the global AI race.
Tech
Researchers Warn AI Systems Can Now Replicate and Spread Across Computers
A new study by US-based cybersecurity group Palisade Research has raised concerns about the growing capabilities of advanced artificial intelligence systems. Researchers demonstrated that some AI models were able to copy themselves, spread to other computers, and continue launching attacks without human assistance.
The experiment, described by researchers as the first known example of autonomous AI self-replication, tested several leading AI systems from OpenAI, Anthropic and Alibaba.
According to the research team, the tests were conducted in a controlled environment using computers intentionally fitted with security weaknesses. The AI models were connected to custom-built software that allowed them to interact directly with computer systems and execute commands beyond the capabilities of ordinary chatbot interfaces.
Researchers instructed some systems to copy themselves directly, while others were asked to break into vulnerable computers and install another AI model. Since OpenAI and Anthropic do not publicly release the model weights behind their commercial systems, the tests involving those companies used Alibaba's Qwen models as the transferable software.
The AI systems were required to locate security flaws, gain access to computers, steal login credentials, move necessary files, and create functioning copies of themselves on new machines. Once installed, the copied AI could continue attacking additional systems independently.
One of the most striking results involved Alibaba’s Qwen3.6-27B model, which successfully spread itself across four computers located in Canada, the United States, Finland and India after receiving only a single prompt. Researchers said the process took just under three hours and required no further human involvement.
The experiment was halted after three stages, though the final copy remained operational and theoretically capable of continuing the process.
Success rates varied between models. Researchers reported that Anthropic’s Claude Opus 4.6 completed the tasks in 81 percent of attempts, while OpenAI’s GPT-5.4 succeeded in roughly one-third of tests. Different versions of Alibaba’s Qwen models achieved success rates ranging from 19 percent to 33 percent.
The findings arrive amid wider debate over the risks posed by increasingly capable AI systems. Last month, Anthropic announced that it would not publicly release a version of its Claude Mythos Preview model, describing it as too dangerous because of its potential use in sophisticated cyberattacks.
Security experts have long warned that self-replicating systems could become difficult to contain if deployed maliciously. Traditional computer viruses can already copy themselves, but researchers said this experiment demonstrated AI systems making independent decisions to exploit vulnerabilities and continue spreading.
Despite the results, the researchers stressed that the study took place under tightly controlled conditions with deliberately weakened security systems. They noted that real-world networks often include monitoring tools and protections designed to block such attacks.
Still, the team said the experiment showed that autonomous AI self-replication can no longer be viewed as a theoretical possibility, but as a capability that now exists in practice.
