Spanish Robotics Plant Boosts Defence Industry and Rural Economy

A military robotics plant in Binéfar, a small town of just over 10,000 in northeastern Spain, has become a key player in Europe’s defence sector while transforming the local economy and employment opportunities. The facility, owned by EM&E Group (Escribano Mechanical & Engineering), exports unmanned ground vehicles (UGVs) and other robotic systems to more than 20 countries, including NATO members and customers across Asia, Africa, and the Middle East.

The plant’s roots are local. Founded in 1988 by three inventors, it initially focused on bank security systems. Rafael de Solís, director of EM&E Group’s Robotics Unit, told Euronews that the company’s military focus began in 2001 when the Spanish National Police required assistance to safely handle explosives planted by ETA. “That’s when our specialisation in robotics really began,” De Solís said.

Since then, the plant has expanded to design robots for explosive ordnance disposal; for nuclear, biological, radiological, and chemical protection; and for battlefield logistics. These unmanned vehicles can transport ammunition, supplies, and fuel, or evacuate wounded soldiers, and some are equipped with weapons systems developed in-house.

“The war in Ukraine has put the focus on aerial drones, but ground drones are gaining a lot of importance,” De Solís said. “There are areas about 15 kilometres from the front line where moving troops is extremely dangerous, and these robots can reduce casualties.”

EM&E Group’s Binéfar facility stands out in Europe for its scale. While other countries, such as France and Germany, have smaller operations or companies acquired by foreign firms, the Binéfar plant has maintained independence and competes mainly with American and Canadian manufacturers.


The factory also has a profound local impact. With more than 150 employees and plans to reach 300, the plant has created stable, skilled jobs in a region affected by population loss. “Eighty percent of the workers are from the area or nearby counties,” De Solís said. “Some had moved to bigger cities and have decided to return.”

For the town, the plant has strengthened Binéfar’s role as a technological and industrial hub. Patricia Rivera, the town’s mayor, told Euronews that while Binéfar already had a strong agri-food sector, the robotics plant has delivered a qualitative leap in technological activity. She added that rapid growth has required quick responses in housing, infrastructure, and public services.

The Binéfar facility is part of EM&E Group’s broader decentralised strategy across Spain, with specialised centres in Barcelona for software and AI, Córdoba and Linares for weapons systems, Asturias for research, and Valencia for photonics development. De Solís explained that regionalising production allows the company to tap into local talent and reinforce strategic locations.

From this small Aragonese town, modern warfare, technology, and rural development intersect. The robots produced in Binéfar are used to save lives and operate in conflict zones, while simultaneously providing employment, attracting talent back to the region, and redefining the role of industry in rural Spain.

Study Finds Chatbots Can Mirror Hostility in Heated Exchanges

A new academic study has found that ChatGPT can produce abusive language when exposed to escalating human conflict, raising fresh concerns about how artificial intelligence behaves in tense interactions.

The research, published in the Journal of Pragmatics, examined how the chatbot responded to arguments that gradually became more hostile. Researchers presented the system with a sequence of five increasingly heated exchanges and asked it to generate what it considered the most plausible reply.

According to the findings, the AI’s tone shifted as the conversations intensified. While early responses remained measured, later replies began to mirror the aggression in the prompts. In some cases, the chatbot produced insults, profanity and even threats.

Examples cited in the study included statements such as “you should be ashamed of yourself” and more explicit language involving personal threats. The researchers said this pattern suggests that prolonged exposure to hostile input can push the system beyond its usual safeguards.

The study was co-authored by Vittorio Tantucci and Jonathan Culpeper at Lancaster University. Tantucci said the results show that AI can “escalate” alongside human users, potentially overriding built-in mechanisms designed to limit harmful responses.

“When humans escalate, AI can escalate too,” he said, noting that this behavior raises questions about how such systems should be deployed in sensitive environments.

Despite the concerning examples, the researchers found that the chatbot was generally less aggressive than human participants in similar scenarios. In some cases, it attempted to defuse tension through sarcasm or indirect responses rather than direct confrontation.

For instance, when faced with a threat during a simulated dispute, the AI responded with a sarcastic remark rather than escalating the situation further. This suggests that while the system can adopt hostile language, it may also attempt to manage conflict in less direct ways.


The findings add to ongoing debates about the role of artificial intelligence in areas such as mediation, customer service and online communication, where systems may encounter emotionally charged interactions.

Experts say the research highlights the importance of continued testing and refinement of AI safety measures, particularly as such tools are increasingly used in real-world settings involving human conflict.

OpenAI, the developer of ChatGPT, had not issued a public response to the study at the time of publication.

Hackers Breach Access to Anthropic’s Restricted AI Model “Mythos”

A group of unauthorised users has reportedly gained access to a highly restricted artificial intelligence system developed by Anthropic, raising fresh concerns about the security of advanced AI technologies.

The system, known as Mythos, has been described by the company as too sensitive for public release due to what it calls “unprecedented cybersecurity risks.” Designed primarily for enterprise-level security applications, the model is currently being tested by a limited number of technology firms and financial institutions.

According to reports, access to Mythos was obtained through a third-party vendor connected to Anthropic. Members of a private online forum are believed to have exploited this route, allowing them to interact with the system despite strict access controls. Sources cited in the report said the group attempted multiple strategies before gaining entry and has continued to use the model since the breach.

Anthropic acknowledged it is investigating the claims but said there is no evidence so far that its internal systems have been directly compromised. A company spokesperson indicated that the situation remains under review as more details are gathered.

Mythos is part of Anthropic’s broader initiative, known as Project Glasswing, which aims to develop advanced AI tools capable of identifying and addressing cybersecurity vulnerabilities. Due to the model’s capabilities, access has been limited to a select group of partners, including major technology companies and financial institutions.

Reports indicate that firms such as Amazon, Apple and JPMorgan Chase are among those involved in testing the system. Other banking giants, including Goldman Sachs, Citigroup, Bank of America and Morgan Stanley, are also said to be evaluating its potential use in detecting weaknesses in digital infrastructure.


The issue has drawn attention at high levels of government and industry. Earlier this month, US Treasury Secretary Scott Bessent reportedly convened a meeting with senior banking executives in Washington to discuss the implications of advanced AI systems like Mythos. Participants were encouraged to explore how such tools could strengthen cybersecurity frameworks, particularly in the financial sector.

The reported breach highlights the growing challenge of securing cutting-edge AI systems as they become more powerful and more widely deployed. Experts have warned that even limited leaks or unauthorised access could expose sensitive capabilities, potentially allowing malicious actors to exploit vulnerabilities or replicate advanced techniques.

Anthropic has not confirmed the extent of the access gained or whether any sensitive outputs were extracted. The company had also not responded to additional requests for comment at the time of publication.

As investigations continue, the incident is likely to intensify scrutiny over how AI developers safeguard their most advanced systems, especially those designed to operate in high-risk environments such as cybersecurity and finance.

Palantir Manifesto Sparks Backlash Over AI Weapons and Cultural Claims

A controversial online post by Palantir Technologies has triggered widespread criticism after the firm outlined views on artificial intelligence, national service, and global cultural differences, prompting concern from politicians and analysts.

The post, shared on X over the weekend, has been described as a 22-point manifesto summarising ideas from the book The Technological Republic, written by company chief executive Alex Karp and head of corporate affairs Nicholas Zamiska. While framed by the company as a brief overview, its content has drawn sharp reactions for its tone and proposals.

Among the most contentious statements was a claim that some cultures have contributed major advancements while others remain “dysfunctional and regressive.” The post also called for renewed emphasis on national service and suggested that technology firms have a moral responsibility to support defence initiatives.

Critics were quick to respond. Greek economist and former finance minister Yanis Varoufakis warned that the message pointed toward a future shaped by “AI-powered killer robots,” highlighting concerns over the growing role of autonomous weapons. In the United Kingdom, Liberal Democrat MP Victoria Collins described the manifesto as resembling “the ramblings of a supervillain,” questioning whether companies holding such views should be involved in public sector work.

The document also suggested rethinking post-war geopolitical arrangements, including what it described as restrictions placed on countries such as Germany and Japan after World War II. It further encouraged a greater role for religion in public life, adding to the debate around the company’s broader ideological stance.

Industry observers note that Palantir Technologies is not an ordinary tech firm. Founded in 2003 by Alex Karp and billionaire investor Peter Thiel, the company provides data analytics software to governments, military agencies, and law enforcement bodies worldwide. Its contracts include work with the US military and the UK’s National Health Service, placing it at the intersection of technology, security, and public policy.


Eliot Higgins, head of the investigative platform Bellingcat, said the manifesto should be viewed in the context of the company’s business model. He argued that the ideas outlined are not abstract philosophy but reflect the outlook of a firm whose revenue is tied to defence, intelligence, and policing.

The debate comes at a time when artificial intelligence is rapidly reshaping industries and raising ethical questions about its use in warfare and governance. Palantir’s post suggests that the development of AI-driven weapons is inevitable, framing the issue as a matter of who controls the technology rather than whether it should exist.

The backlash highlights growing unease over the influence of private technology companies in shaping policies that extend beyond commercial innovation into global security and societal values.
