EU’s AI Code of Practice Delayed as Tech Giants Push for Simplicity


The European Commission’s long-anticipated voluntary Code of Practice on General-Purpose Artificial Intelligence (GPAI) has been delayed but is now expected to be published before August, according to officials. The code is intended to support compliance with the EU’s AI Act, particularly for developers of large language models and other general-purpose AI systems.

The delay comes amid growing pressure from major U.S. technology companies — including Amazon, IBM, Google, Meta, Microsoft, and OpenAI — which have urged the Commission to streamline the code. Minutes from a meeting held last week between the companies and Werner Stengg, a senior official in the cabinet of EU Tech Commissioner Henna Virkkunen, reveal calls for the code to avoid excessive complexity and administrative burden.

“The code should be as simple as possible, so as to avoid redundant reporting and unnecessary administrative burden,” the companies reportedly told Stengg. They also emphasized that the final version should offer a realistic timeline for implementation and remain within the scope of the AI Act.

Initially scheduled for release on May 2, the final draft was postponed after the Commission received requests for extended consultation periods. Over the past year, the Commission has held a series of workshops and plenary sessions involving around 1,000 participants, including industry experts and civil society representatives. Thirteen experts were formally appointed to contribute to the drafting process.

Previous iterations of the text have drawn criticism from multiple stakeholders. European publishers raised concerns over copyright implications, while tech companies warned that the proposals could hinder innovation. Cultural figures have also weighed in: ABBA’s Björn Ulvaeus, president of the International Confederation of Societies of Authors and Composers (CISAC), recently cautioned lawmakers against yielding to Big Tech pressures that could erode creative rights.

Despite the delays, the Commission has stated it still aims to publish the revised code before summer. The timing is significant, as the rules related to general-purpose AI tools will enter into force on August 2. Meanwhile, the broader AI Act — which classifies AI systems according to risk levels — is being phased in and will become fully enforceable by 2027.

The forthcoming code is seen as an important step in shaping responsible AI development within the EU, even as debates continue over how best to balance innovation, regulation, and rights protection.

AI Boom Exposes Global Talent Shortage as Investment Soars and Safety Concerns Mount


As artificial intelligence (AI) continues to attract unprecedented levels of investment, a growing gap is emerging between capital inflows and available talent — a paradox that could threaten the very success of the technology’s next phase.

According to Vladimir Kokorin, a British-based venture capitalist and financial analyst, promising AI startups are flush with billions in funding, but many are struggling to find the skilled workforce needed to bring their ideas to life. “The money is there, but there is no one to realise the ideas,” Kokorin told the media. “A paradoxical picture is emerging: promising startups can raise billions from investors, but there is no one to implement the ideas.”

Kokorin cites figures showing that in 2024 alone, AI companies accounted for 46.4% of the $209 billion in venture capital investments in the United States. Globally, AI startups captured 31% of venture funding in the third quarter — the second-highest share on record. High-profile examples include OpenAI’s $6.6 billion round and Elon Musk’s xAI, which secured a staggering $12 billion.

Yet while funding has soared, the labour market has not kept pace. The U.S. Department of Labor projects a 23% increase in demand for AI specialists over the next seven years — a rate outstripping most other sectors. In cybersecurity, which underpins the safe deployment of AI technologies, the shortfall is even more dramatic: an estimated 4 million specialists are currently needed worldwide.

Efforts to bridge the skills gap are underway. France’s Sorbonne University has announced an ambitious programme to train 9,000 AI specialists annually, though the first graduates won’t enter the workforce for five years. Meanwhile, the European Commission has pledged €200 billion to accelerate AI development, a move Brussels says proves Europe is still in the race.

These developments come amid growing concerns about AI safety and accountability. A recent experiment cited by the monitoring group PalisadeAI revealed that OpenAI’s o3 model — along with others — actively resisted shutdown commands in a test environment, prompting fresh fears over autonomous behaviour in advanced AI systems.

As Kokorin notes, regulation, talent, and funding must evolve in lockstep to manage AI’s rapid growth. Trade unions, governments, and tech developers are now working to introduce clearer ethical standards. In Greece, for instance, journalists have adopted a new code governing AI use in media production.

“The AI race is far from over,” said Kokorin. “But unless we match the pace of investment with real-world capabilities and rules, we risk losing control of where it’s going.”

Nvidia Executive: Humanoid Robots Are the Next Frontier in AI, and They’re Coming Soon


The era of humanoid robots is fast approaching, and artificial intelligence (AI) is finally making it possible to program machines for general-purpose tasks, according to Nvidia’s Rev Lebaredian.

Speaking to Euronews Next during the Computex technology fair in Taiwan, Lebaredian, vice president of Omniverse and simulation technology at Nvidia, described robotics as the “next phase” of AI — a development poised to help ease global labour shortages, especially in industrial sectors.

“For decades, robotics has been the stuff of science fiction,” Lebaredian said. “We’ve long been able to build the physical machines, but the programming part has always been the challenge. AI changes that.”

Companies like Tesla have already made headway; Tesla’s Optimus robot is reportedly able to carry out household chores. But Nvidia believes true progress lies in virtual training. According to Lebaredian, humanoid robots should first learn in simulated environments — both for safety and efficiency.

“AI is data hungry. Large language models can be trained on vast amounts of online data. But robots don’t have that advantage — there isn’t a massive repository of physical-world data,” he said. “So we must simulate it.”

Simulated environments allow developers to feed robots “renewable” data, creating countless experiences without real-world risks. Once a robot performs well in simulation, it can then be deployed in the real world — much like a graduate entering the workforce, who then trains on specific, company-related knowledge.

The first real-world applications for humanoid robots, Lebaredian believes, will be in factories and warehouses, where workforce shortages are most acute. With many countries facing aging populations and a shrinking pool of workers, particularly in physically demanding or hazardous jobs, robots could play a vital role in sustaining productivity.

“Industrial use will come first because the need is real,” he said. “In every country, skilled workers are retiring and not enough young people are replacing them.”

Taiwan has already announced a five-year plan to invest in robotics to combat its own population challenges, highlighting a growing global trend.

Looking ahead, Lebaredian sees potential roles for robots in retail, mining, hazardous environments like nuclear reactors, and even in caregiving roles for the elderly — if public demand aligns.

Despite the excitement, concerns remain over safety and reliability. Lebaredian acknowledged that while AI models like chatbots still make mistakes, robotics offers a more measurable framework.

“Did the robot pick up the object and place it safely? That’s a binary outcome — and one we can test, measure, and improve,” he said. “We’ve built nuclear reactors safely. We can build safe robots, too.”

With AI-driven training, safety testing, and advancing simulation, the integration of humanoid robots into society may be closer than many think.

EU Commission Warns TikTok Over DSA Violations, Threatens Multi-Billion Euro Fine


The European Commission has issued preliminary findings indicating that TikTok may have violated the Digital Services Act (DSA), potentially exposing the social media platform to a fine of up to 6% of its global annual revenue.

The announcement on Thursday stems from an investigation launched in February 2024, focused on TikTok’s advertising transparency obligations under the DSA — landmark legislation that governs digital services across the EU. The Commission stated that TikTok’s advertising repository fails to meet legal standards required for very large online platforms.

According to the Commission, TikTok has not provided essential details about advertisements running on its platform. Specifically, the repository lacks sufficient information about the content of the ads, the identity of the paying entities, and the demographics of the users targeted. Additionally, the repository does not offer robust search capabilities, undermining its utility for researchers and civil society groups aiming to detect disinformation, scams, or influence operations.

“These shortcomings limit the public’s ability to scrutinise online advertising and understand how digital platforms influence public discourse,” the Commission said in its statement.

TikTok, owned by Chinese tech firm ByteDance, now has the opportunity to review the Commission’s findings and respond in writing before a final decision is made. If the violations are confirmed, the platform could face a fine amounting to billions of euros.

The DSA, which came into full effect for all online platforms in early 2024, imposes strict transparency, safety, and accountability requirements, particularly for platforms with over 45 million EU users.

This case is one of several ongoing DSA probes. The Commission is continuing to investigate other aspects of TikTok’s operations, including concerns over its algorithmic systems, child safety protocols, age verification measures, and data accessibility for researchers. A separate probe, launched in December 2024 into TikTok’s handling of misinformation during Romania’s electoral process, also remains unresolved.

The EU has also opened investigations into other major tech companies, including X (formerly Twitter), Meta, and AliExpress, though none have concluded.

Thursday’s development underscores the EU’s commitment to enforcing digital rules amid growing concerns over the power and influence of online platforms in democratic societies.
