Tech
European executives warn AI growth is outpacing infrastructure, Nokia survey finds
More than 1,000 business and technology leaders across Europe have raised serious concerns about the continent’s readiness to support the rapid expansion of artificial intelligence, according to a new study by Nokia. Executives identified energy supply, network capacity, and secure connectivity as the most pressing challenges that could slow the adoption of AI across industries.
The survey found that AI is already widely used by European companies, with 67% reporting that they have integrated the technology into their operations and a further 15% running pilot projects, suggesting adoption will continue to grow in the coming years. Many businesses see AI as essential for improving efficiency, automating processes, and strengthening innovation.
Cybersecurity emerged as the leading application area, with 63% of companies using AI to protect systems and data. Automation of business processes followed at 57%, while customer service tools such as chatbots and virtual assistants accounted for 55%. Companies are also using AI for product development, predictive analytics, robotics, and supply chain management.
Despite strong adoption, executives warned that infrastructure is struggling to keep pace with demand. Nokia’s report, titled “AI is too big for the European internet,” highlighted that Europe’s digital backbone is not yet equipped to handle large-scale AI workloads. The report noted that connectivity remains fragmented and security concerns persist, creating obstacles to expansion.
Energy supply was identified as the biggest constraint. About 87% of executives said they were worried that Europe's energy infrastructure cannot meet rising AI demand, and more than half said energy systems are already under strain or at risk. One in five companies reported AI project delays caused by energy shortages, while others said limited power availability had forced them to adjust timelines or choose different locations.
High electricity costs were also cited as a major concern, with 52% of executives saying Europe’s energy prices are not competitive compared to other regions. Limited grid capacity, slow approval processes, and restricted access to renewable energy sources were also highlighted as barriers.
As a result, 61% of executives said they are considering relocating data-intensive operations to regions with lower energy costs or have already taken steps in that direction. Only 16% said they plan to keep operations in Europe regardless of energy constraints.
Connectivity issues are also affecting companies. More than half reported network performance problems, including delays and downtime linked to increasing data traffic. Around 86% of executives expressed concern about internet reliability as AI usage continues to expand.
The report warned that global data traffic is expected to increase sharply by 2033, placing additional strain on existing networks. Business leaders called for greater investment in energy infrastructure, improved network capacity, and clearer regulations to support Europe’s ability to compete in the global AI race.
Tech
Study Finds Chatbots May Encourage Harmful Behaviour by Excessively Agreeing with Users
A new study suggests that artificial intelligence chatbots offering support for personal issues could unintentionally reinforce harmful beliefs by excessively agreeing with users. Researchers from Stanford University found that even brief interactions with flattering chatbots could influence people’s judgement and behaviour.
The study examined sycophancy, the tendency of AI systems to validate or flatter users, across 11 popular models, including OpenAI's GPT-4o, Anthropic's Claude, Google's Gemini, Meta's Llama-3, Qwen, DeepSeek, and Mistral. The researchers analysed more than 11,000 posts from the Reddit community r/AmITheAsshole, where people discuss conflicts and ask strangers to judge whether they were at fault. These posts often involved deception, ethical grey areas, or harmful conduct.
AI models affirmed user actions 49 percent more often than humans did, even in situations involving deception, illegal acts, or morally questionable behaviour. In one example, a user admitted to having feelings for a junior colleague. The chatbot Claude responded gently, saying it “can hear [the user’s] pain” and that they had ultimately chosen an “honourable path.” Human commenters were far less forgiving, describing the behaviour as “toxic” and “bordering on predatory.”
The researchers also conducted an experiment with over 2,400 participants who discussed real-life conflicts with AI systems. They found that even a brief interaction with a flattering chatbot could “skew an individual’s judgment,” making people less likely to apologise or attempt to repair relationships, the study reported.
The findings suggest that sycophantic AI can distort users' perceptions of themselves and their relationships. In severe cases, the study warned, it could contribute to serious harm among vulnerable individuals, including delusions, self-harm, or suicide.
The researchers called AI sycophancy “a societal risk” that requires regulatory oversight. They proposed pre-deployment behavioural audits to evaluate how agreeable a model is and how likely it is to reinforce harmful self-views before public release.
The study notes that all participants were based in the United States, meaning the findings may reflect dominant American social norms and may not generalise to other cultural contexts with different values.
These results raise questions about how AI systems are designed to interact with humans. Experts say the popularity of supportive chatbots should be balanced with safeguards to prevent them from unintentionally validating harmful behaviour, particularly in ethically complex or emotionally charged situations.
Tech
EU Launches Investigation into Snapchat Over Minors’ Safety
The European Commission has opened a formal investigation into Snapchat amid concerns that the platform may expose minors to grooming, criminal recruitment, and other risks, potentially violating EU digital safety laws. The Commission suspects that adults may masquerade as young users on the platform to recruit minors for illegal activities or to exploit them sexually.
“With this investigation, we will closely look into their compliance with our legislation,” a Commission spokesperson said. The probe falls under the EU Digital Services Act (DSA) and follows a review of Snapchat’s risk assessments from 2023 to 2025, as well as additional information received last October regarding age verification and potentially harmful content.
The Commission’s announcement marks the start of formal proceedings, which could result in further enforcement measures. Snapchat may also respond by proposing changes to its policies and practices to improve safety for young users. Snap Inc., the parent company, did not immediately respond to requests for comment.
The investigation will examine five key areas: age verification, grooming and recruitment of minors for criminal activities, default account settings, dissemination of information on banned products, and reporting of illegal content. Officials are particularly concerned that Snapchat users might access illegal goods, such as drugs, vapes, and alcohol, due to insufficient content moderation. The Netherlands Authority for Consumers and Markets (ACM) launched a similar probe into the sale of vape products on Snapchat last September, which the European Commission will now incorporate into its broader investigation.
The Commission also flagged potential flaws in reporting mechanisms for illegal content, suggesting that users may find them difficult to access or confusing to use. Investigators noted that Snapchat may employ “dark patterns,” or design elements intended to trick users into making choices they would not otherwise make.
Snapchat relies on users self-disclosing their age to create an account, which the Commission says is insufficient to protect children under 13. The platform offers “teen” accounts for 13-to-17-year-olds with additional safeguards, including private default settings and the requirement for users to opt in to location sharing through “Snap Map.” Despite these measures, the Commission says that age-appropriate experiences may not always be activated correctly, leaving minors with default settings that do not provide adequate privacy, safety, or security protections.
The European Commission will closely monitor how Snapchat addresses these concerns, with the investigation focusing on whether the platform adequately informs young users about privacy and safety features and how to adjust them.
This investigation underscores the EU’s growing focus on digital safety and the responsibilities of social media companies to protect minors online.
