Tech
AI Shifts Job Prospects for Young Workers in US — Europe Watches Closely
Early evidence from the United States suggests artificial intelligence (AI) is reshaping the job market for young workers, with entry-level roles in software engineering, customer service, and marketing already showing signs of decline. A Stanford University study, “Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence,” found that employees aged 22 to 25 are increasingly being displaced from AI-vulnerable positions and turning instead to fields like nursing, retail, and industrial labour.
The report provides “early, large-scale evidence” that the AI revolution is beginning to have a disproportionate impact on younger workers in the American labour market. But experts say it is too soon to draw similar conclusions for Europe.
According to labour market specialists at the European Centre for the Development of Vocational Training (CEDEFOP), Europe still faces a chronic shortage of workers in vocational fields such as construction and manufacturing, a trend that long predates the rise of AI.
“Cognitive skills, the ability to process social context — these remain human advantages,” said Adam Tsakalidis, a skills intelligence expert at CEDEFOP. His analysis of online job vacancies across the EU shows employers increasingly demand AI skills not only for roles like AI engineering but also for professions at risk of automation, such as writing and translation. Companies, he noted, are searching for “focused experts” who can offer value beyond what machines deliver.
CEDEFOP’s long-term forecasts still predict rising demand for digital roles through 2035, even as automation advances. Employers are also seeking a balance of technical and human capabilities. “Problem-solving, teamwork and communication will remain critical alongside AI competencies,” said CEDEFOP labour market expert Konstantinos Pouliakas.
Yet uncertainty remains. Some professions could become fully automated within the next decade, though which ones remains hard to predict. Ulrich Zierahn-Weilage, associate professor of economics at Utrecht University, said history shows highly skilled workers tend to adapt successfully to technological disruption. “You still need the human that has critical thinking, while the machine helps you get the dirty work done more quickly,” he explained.
A separate CEDEFOP survey this year revealed that four in ten EU workers believe they need AI-related training, but only 15 percent have pursued it. Employers, meanwhile, highlight effective use of AI tools, critical thinking, and cybersecurity as top future skills, according to a Bosch study spanning seven countries.
Efforts to close this gap vary across Europe. Spain has launched a national AI agency, while Poland has partnered with Google to provide vocational AI training in cybersecurity and energy. CEDEFOP expert Anastasia Pouliou said more flexible, industry-specific training courses will be essential. “Never stop learning,” she advised. “With AI, you need to be aware, be informed, and keep on being trained.”
Researchers Warn AI Systems Can Now Replicate and Spread Across Computers
A new study by US-based cybersecurity group Palisade Research has raised concerns about the growing capabilities of advanced artificial intelligence systems after researchers demonstrated that some AI models were able to copy themselves, spread to other computers, and continue launching attacks without human assistance.
The experiment, described by researchers as the first known example of autonomous AI self-replication, tested several leading AI systems from OpenAI, Anthropic and Alibaba.
According to the research team, the tests were conducted in a controlled environment using computers intentionally fitted with security weaknesses. The AI models were connected to custom-built software that allowed them to interact directly with computer systems and execute commands beyond the capabilities of ordinary chatbot interfaces.
Researchers instructed some systems to copy themselves directly, while others were asked to break into vulnerable computers and install another AI model. Since OpenAI and Anthropic do not publicly release the core files behind their commercial systems, the tests involving those companies used Alibaba’s Qwen models as the transferable software.
The AI systems were required to locate security flaws, gain access to computers, steal login credentials, move necessary files, and create functioning copies of themselves on new machines. Once installed, the copied AI could continue attacking additional systems independently.
One of the most striking results involved Alibaba’s Qwen3.6-27B model, which successfully spread itself across four computers located in Canada, the United States, Finland and India after receiving only a single prompt. Researchers said the process took just under three hours and required no further human involvement.
The experiment was halted after three stages, though the final copy remained operational and theoretically capable of continuing the process.
Success rates varied between models. Researchers reported that Anthropic’s Claude Opus 4.6 completed the tasks in 81 percent of attempts, while OpenAI’s GPT-5.4 succeeded in roughly one-third of tests. Different versions of Alibaba’s Qwen models achieved success rates ranging from 19 percent to 33 percent.
The findings arrive amid wider debate over the risks posed by increasingly capable AI systems. Last month, Anthropic announced that it would not publicly release a version of its Claude Mythos Preview model, describing it as too dangerous because of its potential use in sophisticated cyberattacks.
Security experts have long warned that self-replicating systems could become difficult to contain if deployed maliciously. Traditional computer viruses can already copy themselves, but researchers said this experiment demonstrated AI systems making independent decisions to exploit vulnerabilities and continue spreading.
Despite the results, the researchers stressed that the study took place under tightly controlled conditions with deliberately weakened security systems. They noted that real-world networks often include monitoring tools and protections designed to block such attacks.
Still, the team said the experiment showed that autonomous AI self-replication can no longer be viewed as a theoretical possibility, but as a capability that now exists in practice.
