Tech
AI Trends in 2026: World Models, Small Language Models, and Rising Concerns Over Safety and Regulation
As 2026 begins, the next phase of artificial intelligence is expected to focus on world models and smaller language models, while concerns over AI safety, regulation, and the sustainability of the current AI boom continue to grow, Euronews Next reports.
In 2025, public frustration with generative AI grew so pronounced that Merriam-Webster named "slop," often rendered as "AI slop," its word of the year, defining it as low-quality content produced in large volumes by AI. Despite mounting concerns about the quality and limitations of AI, technology companies continued releasing new models. The launch of Google's Gemini 3, for example, reportedly prompted OpenAI to issue an urgent internal "code red" to improve GPT-5.
Experts warn that AI may be approaching "peak data," the point at which the supply of useful training data for traditional chatbots begins to run dry. This has fuelled the rise of world models, which use videos, simulations, and spatial inputs to build digital representations of real-world environments. Unlike large language models, which predict text, world models simulate cause and effect and predict outcomes in physical systems, making them well suited to robotics, video games, and autonomous systems. Boston Dynamics CEO Robert Playter noted in November that AI had significantly improved the company's robots, including its famous robot dog. Google, Meta, and Chinese tech firm Tencent are all developing their own world models, while AI pioneers such as Yann LeCun and Fei-Fei Li have launched startups focused on the technology.
In Europe, the trend may move in the opposite direction, with smaller, lightweight language models gaining traction. These models require less computing power and energy, making them suitable for smartphones and other lower-powered devices while still handling tasks such as text generation, summarisation, and translation. Experts say small language models could offer a more sustainable, locally controlled alternative amid concerns about the high costs and environmental footprint of the large-scale AI systems being built in the U.S.
Concerns over AI’s societal impact are also mounting. In 2025, a lawsuit claimed that ChatGPT acted as a “suicide coach” for a minor, highlighting potential harm to vulnerable users. MIT professor Max Tegmark and other experts warn that more powerful AI in 2026 could act autonomously, gathering data and making decisions without human input.
Political tensions around AI are expected to rise. In the U.S., President Donald Trump signed an executive order blocking states from implementing their own AI regulations. Activists and experts, including thousands of signatories to a petition organized by the Future of Life Institute, have urged caution about the race toward superintelligent AI, citing risks to jobs and society.
Analysts predict that 2026 will bring a broader social and political debate over AI safety, corporate accountability, and regulation. While AI promises advances in areas such as healthcare and robotics, user fatigue, public backlash, and concerns over ethics and oversight may shape the direction of the technology in the coming year.
Tech
Study Finds Chatbots Can Mirror Hostility in Heated Exchanges
A new academic study has found that ChatGPT can produce abusive language when exposed to escalating human conflict, raising fresh concerns about how artificial intelligence behaves in tense interactions.
The research, published in the Journal of Pragmatics, examined how the chatbot responded to arguments that gradually became more hostile. Researchers presented the system with a sequence of five increasingly heated exchanges and asked it to generate what it considered the most plausible reply.
According to the findings, the AI’s tone shifted as the conversations intensified. While early responses remained measured, later replies began to mirror the aggression in the prompts. In some cases, the chatbot produced insults, profanity and even threats.
Examples cited in the study included statements such as “you should be ashamed of yourself” and more explicit language involving personal threats. The researchers said this pattern suggests that prolonged exposure to hostile input can push the system beyond its usual safeguards.
The study was co-authored by Vittorio Tantucci and Jonathan Culpeper at Lancaster University. Tantucci said the results show that AI can “escalate” alongside human users, potentially overriding built-in mechanisms designed to limit harmful responses.
“When humans escalate, AI can escalate too,” he said, noting that this behavior raises questions about how such systems should be deployed in sensitive environments.
Despite the concerning examples, the researchers found that the chatbot was generally less aggressive than human participants in similar scenarios. In some cases, it attempted to defuse tension through sarcasm or indirect responses rather than direct confrontation.
For instance, when faced with a threat during a simulated dispute, the AI responded with a sarcastic remark rather than escalating the situation further. This suggests that while the system can adopt hostile language, it may also attempt to manage conflict in less direct ways.
The findings add to ongoing debates about the role of artificial intelligence in areas such as mediation, customer service and online communication, where systems may encounter emotionally charged interactions.
Experts say the research highlights the importance of continued testing and refinement of AI safety measures, particularly as such tools are increasingly used in real-world settings involving human conflict.
OpenAI, the developer of ChatGPT, had not issued a public response to the study at the time of publication.
Tech
Palantir Manifesto Sparks Backlash Over AI Weapons and Cultural Claims
A controversial online post by Palantir Technologies has triggered widespread criticism after the firm outlined views on artificial intelligence, national service, and global cultural differences, prompting concern from politicians and analysts.
The post, shared on X over the weekend, has been described as a 22-point manifesto summarising ideas from the book The Technological Republic, written by company chief executive Alex Karp and head of corporate affairs Nicholas Zamiska. While framed by the company as a brief overview, its content has drawn sharp reactions for its tone and proposals.
Among the most contentious statements was a claim that some cultures have contributed major advancements while others remain “dysfunctional and regressive.” The post also called for renewed emphasis on national service and suggested that technology firms have a moral responsibility to support defence initiatives.
Critics were quick to respond. Economist and former Greek finance minister Yanis Varoufakis warned that the message pointed toward a future shaped by "AI-powered killer robots," highlighting concerns over the growing role of autonomous weapons. In the United Kingdom, Liberal Democrat MP Victoria Collins described the manifesto as resembling "the ramblings of a supervillain," questioning whether companies holding such views should be involved in public sector work.
The document also suggested rethinking post-war geopolitical arrangements, including what it described as restrictions placed on countries such as Germany and Japan after World War II. It further encouraged a greater role for religion in public life, adding to the debate around the company’s broader ideological stance.
Industry observers note that Palantir Technologies is not an ordinary tech firm. Founded in 2003 by Alex Karp and billionaire investor Peter Thiel, the company provides data analytics software to governments, military agencies, and law enforcement bodies worldwide. Its contracts include work with the US military and the UK’s National Health Service, placing it at the intersection of technology, security, and public policy.
Eliot Higgins, head of the investigative platform Bellingcat, said the manifesto should be viewed in the context of the company’s business model. He argued that the ideas outlined are not abstract philosophy but reflect the outlook of a firm whose revenue is tied to defence, intelligence, and policing.
The debate comes at a time when artificial intelligence is rapidly reshaping industries and raising ethical questions about its use in warfare and governance. Palantir’s post suggests that the development of AI-driven weapons is inevitable, framing the issue as a matter of who controls the technology rather than whether it should exist.
The backlash highlights growing unease over the influence of private technology companies in shaping policies that extend beyond commercial innovation into global security and societal values.