
New AI System Helps “Kidnapped” Robots Find Their Way in Changing Environments


Researchers in Spain have developed an AI system that allows robots to recover their position even after being moved, powered off, or displaced, offering a solution to the long-standing “kidnapped robot” problem. The system, designed at Miguel Hernández University of Elche, could enable autonomous machines to navigate safely in environments that change over time.

Autonomous robots, used in service operations, logistics, infrastructure inspection, environmental monitoring, and self-driving vehicles, often rely on satellite navigation systems such as GPS. These signals can be unreliable near tall buildings or completely unavailable indoors, making precise localisation a persistent challenge.

The new approach, called MCL-DLF (Monte Carlo Localisation – Deep Local Feature), uses 3D LiDAR technology to scan surroundings with laser pulses, creating a detailed map-like representation of the environment. By analysing both large structures and small distinguishing details, the system helps robots determine their exact location.

“This is similar to how people first recognise a general area and then rely on small distinguishing details to determine their precise location,” said Míriam Máximo, lead author of the study and a researcher at Miguel Hernández University of Elche.

MCL-DLF uses AI to identify which environmental features are most useful for localisation. The system maintains multiple possible location estimates simultaneously and continuously updates them as new sensor data becomes available. This allows robots to maintain reliable positioning even when environments look similar or have changed, such as when vegetation shifts or lighting conditions vary.
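The multiple-hypothesis mechanism described above is the core of Monte Carlo Localisation, which is typically implemented as a particle filter: many candidate positions are propagated with the robot's motion, scored against sensor data, and resampled so that well-matching hypotheses survive. As a rough illustration only (not the researchers' code, and reduced to one dimension with a made-up likelihood function), one update cycle might look like:

```python
import random

def mcl_update(particles, move, sense_likelihood, noise=0.1):
    """One Monte Carlo Localisation step: predict, weight, resample.

    particles        -- list of candidate positions (hypotheses)
    move             -- commanded displacement since the last update
    sense_likelihood -- function scoring how well a position matches
                        the latest sensor reading
    """
    # 1. Prediction: shift every hypothesis by the motion, plus noise.
    predicted = [p + move + random.gauss(0, noise) for p in particles]

    # 2. Weighting: score each hypothesis against the sensor data.
    weights = [sense_likelihood(p) for p in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]

    # 3. Resampling: keep hypotheses in proportion to their weight,
    #    so unlikely positions die out over successive updates.
    return random.choices(predicted, weights=weights, k=len(particles))

# Toy usage: the robot is actually near x = 5.0, but starts with no idea
# where it is (the "kidnapped robot" situation).
random.seed(0)

def likelihood(p):
    return 1.0 / (1.0 + (p - 5.0) ** 2)

particles = [random.uniform(0, 10) for _ in range(500)]
for _ in range(20):
    particles = mcl_update(particles, move=0.0, sense_likelihood=likelihood)

estimate = sum(particles) / len(particles)
# estimate converges near the true position (around 5.0)
```

Even from a uniform initial guess, repeated weighting and resampling concentrate the hypotheses around the true position, which is why the approach can recover after the robot is moved or powered off.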

The research team tested the system over several months on the university campus under diverse conditions, including different seasons, lighting, and natural changes in vegetation. Results showed that MCL-DLF provided stronger positioning accuracy and more consistent performance compared with conventional localisation methods.


By enabling robots to navigate without constant reliance on external infrastructure, the system could increase operational independence in real-world environments, where conditions rarely remain static. Reliable localisation is particularly important for tasks where safety and precision are critical, such as autonomous deliveries, environmental monitoring, and industrial inspections.

The development of MCL-DLF represents a significant advance in robotics, providing a practical solution to the kidnapped robot problem. Researchers say the technology could help service and industrial robots operate more effectively in complex, dynamic settings, paving the way for wider adoption of autonomous systems in both indoor and outdoor environments.

With AI-driven localisation, robots may soon be able to recover from displacements quickly and continue tasks without human intervention, making them more resilient and adaptable in everyday operations.


Report Questions Evidence Behind AI Industry’s Climate Claims


A new report by German non-profit Beyond Fossil Fuels has raised concerns about the strength of evidence supporting claims that artificial intelligence can significantly reduce global carbon emissions.

The group reviewed more than 150 climate-related statements made by leading AI companies and organisations, including the International Energy Agency. It found that only 26 per cent of the claims cited published academic research, while 36 per cent did not reference any evidence at all. The remaining claims relied on corporate reports, media coverage, NGO publications or unpublished academic work.

According to the report, many corporate sources lack peer-reviewed data or primary research to substantiate their projections. “The evidence for massive climate benefits of AI is weak, whilst the evidence of substantial harm is strong,” the authors wrote.

Estimates of AI’s environmental footprint vary widely. A January study published in the journal Patterns suggested that data centres alone may have emitted between 32.6 million and 79.7 million tonnes of carbon dioxide in 2025, roughly comparable to the annual emissions of a small European country.

By contrast, the International Energy Agency has argued that AI could cut global emissions by up to 5 per cent by 2035 by accelerating innovation in the energy sector. The agency has pointed to applications such as testing new battery chemistries and materials for solar power as examples of how AI might support cleaner technologies.

Beyond Fossil Fuels examined high-profile industry claims, including a projection cited by Google that AI could reduce global greenhouse gas emissions by 5 to 10 per cent by 2030 if widely adopted. The report traced the estimate back to a 2021 blog post by consulting firm Boston Consulting Group, which based the figure on client experience rather than peer-reviewed global analysis. Researchers described the claim as an extrapolation built on limited evidence.


The report also reviewed assertions that smaller, narrowly trained AI models are more environmentally efficient. It concluded that there is insufficient peer-reviewed research demonstrating that such systems can deliver measurable emissions reductions at scale.

In addition, the analysis said it found no verified example of generative AI systems such as OpenAI’s ChatGPT, Google’s Gemini or Microsoft’s Copilot producing substantial, measurable emissions cuts. Even if certain efficiencies exist, the report argues that they may be outweighed by the rapid expansion in energy use linked to data centre growth.

The authors said their findings do not suggest AI lacks climate benefits altogether, but they contend there is limited evidence that current applications can offset the sector’s growing energy demands. Requests for comment were sent to major AI firms and the International Energy Agency.



Cybersecurity Experts Warn of Risks in AI Caricature Trend


The latest AI-generated caricature trend, in which users upload images of themselves to chatbots such as ChatGPT, could pose serious security risks, cybersecurity experts have warned. Uploaded images may be retained for an unknown length of time and, if they fall into the wrong hands, could be used for impersonation, scams, and fake social media accounts.

The trend invites users to submit photos of themselves, sometimes alongside company logos or job details, and ask AI systems to create colorful caricatures based on what the chatbot “knows” about them. While the results can be entertaining, experts caution that sharing these images can reveal far more than participants realise.

“You are doing fraudsters’ work for them — giving them a visual representation of who you are,” said Bob Long, vice-president at age authentication company Daon. He added that the trend’s wording alone raises concerns, suggesting it could have been “intentionally started by a fraudster looking to make the job easy.”

When an image is uploaded, AI systems process it to extract data such as a person’s emotions, surroundings, or potentially location details, according to cybersecurity consultant Jake Moore. This information may then be stored indefinitely. Long said that uploaded images could also be used to train AI image generators as part of their datasets.

The potential consequences of data breaches are significant. Charlotte Wilson, head of enterprise at Israeli cybersecurity firm Check Point, said that if sensitive images fall into the wrong hands, criminals could use them to create realistic AI deepfakes, run scams, or establish fake social media accounts. “Selfies help criminals move from generic scams to personalised, high-conviction impersonation,” she said.


OpenAI’s privacy policy states that images may be used to improve the model, including training it. ChatGPT clarified that this does not mean every uploaded photo is stored in a public database, but patterns from user content may be used to refine how the system generates images.

Experts emphasise precautions for those wishing to participate. Wilson advised avoiding images that reveal identifying details. “Crop tightly, keep the background plain, and do not include badges, uniforms, work lanyards, location clues or anything that ties you to an employer or a routine,” she said. She also recommended avoiding personal information in prompts, such as job titles, city, or employer.

Moore suggested reviewing privacy settings before participating. OpenAI allows users to opt out of AI training for uploaded content via a privacy portal, and users can also disable text-based training by turning off the “improve the model for everyone” option. Under EU law, users can request the deletion of personal data, though OpenAI may retain some information to address security, fraud, and abuse concerns.

As AI trends continue to gain popularity, experts caution that even seemingly harmless images can carry significant risks. Proper precautions and awareness are essential for users to protect their personal information while engaging with new AI technologies.



European Executives Warn AI Growth Is Outpacing Infrastructure, Nokia Survey Finds


More than 1,000 business and technology leaders across Europe have raised serious concerns about the continent’s readiness to support the rapid expansion of artificial intelligence, according to a new study by Nokia. Executives identified energy supply, network capacity, and secure connectivity as the most pressing challenges that could slow the adoption of AI across industries.

The survey found that AI is already widely used by European companies, with 67% reporting that they have integrated the technology into their operations. Another 15% are running pilot projects, indicating that adoption is expected to grow significantly in the coming years. Many businesses see AI as essential for improving efficiency, automating processes, and strengthening innovation.

Cybersecurity emerged as the leading application area, with 63% of companies using AI to protect systems and data. Automation of business processes followed at 57%, while customer service tools such as chatbots and virtual assistants accounted for 55%. Companies are also using AI for product development, predictive analytics, robotics, and supply chain management.

Despite strong adoption, executives warned that infrastructure is struggling to keep pace with demand. Nokia’s report, titled “AI is too big for the European internet,” highlighted that Europe’s digital backbone is not yet equipped to handle large-scale AI workloads. The report noted that connectivity remains fragmented and security concerns persist, creating obstacles to expansion.

Energy supply was identified as the biggest constraint. About 87% of executives said they were worried that Europe’s energy infrastructure cannot meet rising AI demand. More than half said energy systems are already under strain or at risk. One in five companies reported delays to AI projects due to energy shortages, while others said they had to adjust project timelines or choose different locations because of limited power availability.


High electricity costs were also cited as a major concern, with 52% of executives saying Europe’s energy prices are not competitive compared to other regions. Limited grid capacity, slow approval processes, and restricted access to renewable energy sources were also highlighted as barriers.

As a result, 61% of executives said they are considering relocating data-intensive operations to regions with lower energy costs or have already taken steps in that direction. Only 16% said they plan to keep operations in Europe regardless of energy constraints.

Connectivity issues are also affecting companies. More than half reported network performance problems, including delays and downtime linked to increasing data traffic. Around 86% of executives expressed concern about internet reliability as AI usage continues to expand.

The report warned that global data traffic is expected to increase sharply by 2033, placing additional strain on existing networks. Business leaders called for greater investment in energy infrastructure, improved network capacity, and clearer regulations to support Europe’s ability to compete in the global AI race.

