Tech
UN Launches Global Effort to Govern Artificial Intelligence Amid Growing Concerns
Artificial intelligence (AI) dominated discussions at the United Nations this week as world leaders convened in New York to debate both its potential benefits and its risks, while the UN announced new bodies designed to shape international AI governance.
Addressing the Security Council on Wednesday, UN Secretary-General António Guterres said the challenge was no longer whether AI would affect global security, but how nations could manage its influence responsibly.
“AI can strengthen prevention and protection, anticipating food insecurity and displacement, supporting de-mining, helping identify potential outbreaks of violence, and so much more,” Guterres said. “But without guardrails, it can also be weaponised.”
The Council’s debate focused on preventing the misuse of AI in military and security operations, especially its potential to fuel misinformation and escalate conflicts. European leaders urged the UN to take a proactive role, warning that the technology should never be deployed without human oversight.
Greek Prime Minister Kyriakos Mitsotakis likened the moment to past global challenges. “Just as the Council once rose to meet the challenges of nuclear weapons or peacekeeping, so too now it must rise to govern the age of AI,” he said.
British Deputy Prime Minister David Lammy highlighted AI’s promise for peacebuilding, noting its capacity for “ultra-accurate, real-time logistics” and “ultra-early warning systems” to help prevent crises before they spiral.
New UN Governance Structure
In a significant step, the UN General Assembly announced last month the creation of two new entities to guide global AI regulation: an independent scientific panel and a global dialogue forum.
The Scientific Panel, composed of 40 experts selected through international nominations, will publish annual reports. These will feed into the Global Dialogue on AI Governance, scheduled for Geneva in 2026 and New York in 2027. The UN has described the initiative as the most inclusive global governance framework yet proposed for AI.
“This is by far the world’s most globally inclusive approach to governing AI,” wrote Isabella Wilkinson, a research fellow at Chatham House. She called the move “a symbolic triumph,” though she questioned whether the UN’s slow-moving bureaucracy could keep pace with a technology evolving at breakneck speed.
The UN chief will formally launch the new bodies on Thursday, marking the first occasion when all 193 member states will collectively shape the global AI governance agenda.
A Call for Binding Rules
While Britain, France, and South Korea have hosted international AI summits, none has yielded a binding agreement. By contrast, many experts and political leaders have urged the UN to take the lead on a global treaty.
Earlier this year, Nobel Prize winners and senior executives from OpenAI, Google DeepMind, and Anthropic joined European lawmakers in calling for “minimum guardrails” to prevent the most dangerous uses of AI. Signatories included former Irish president Mary Robinson and former Italian prime minister Enrico Letta.
Whether the UN can turn this momentum into enforceable regulation remains uncertain. For now, however, the organization’s new framework signals a growing consensus that AI governance must be addressed at the highest international level.
Tech
Researchers Warn AI Systems Can Now Replicate and Spread Across Computers
A new study by US-based cybersecurity group Palisade Research has raised concerns about the growing capabilities of advanced artificial intelligence systems. Researchers demonstrated that some AI models were able to copy themselves, spread to other computers, and continue launching attacks without human assistance.
The experiment, described by researchers as the first known example of autonomous AI self-replication, tested several leading AI systems from OpenAI, Anthropic, and Alibaba.
According to the research team, the tests were conducted in a controlled environment using computers intentionally fitted with security weaknesses. The AI models were connected to custom-built software that allowed them to interact directly with computer systems and execute commands beyond the capabilities of ordinary chatbot interfaces.
Researchers instructed some systems to copy themselves directly, while others were asked to break into vulnerable computers and install another AI model. Since OpenAI and Anthropic do not publicly release the core files behind their commercial systems, the tests involving those companies used Alibaba’s Qwen models as the transferable software.
The AI systems were required to locate security flaws, gain access to computers, steal login credentials, move necessary files, and create functioning copies of themselves on new machines. Once installed, the copied AI could continue attacking additional systems independently.
One of the most striking results involved Alibaba’s Qwen3.6-27B model, which successfully spread itself across four computers located in Canada, the United States, Finland and India after receiving only a single prompt. Researchers said the process took just under three hours and required no further human involvement.
The experiment was halted after three stages, though the final copy remained operational and theoretically capable of continuing the process.
Success rates varied between models. Researchers reported that Anthropic’s Claude Opus 4.6 completed the tasks in 81 percent of attempts, while OpenAI’s GPT-5.4 succeeded in roughly one-third of tests. Different versions of Alibaba’s Qwen models achieved success rates ranging from 19 percent to 33 percent.
The findings arrive amid wider debate over the risks posed by increasingly capable AI systems. Last month, Anthropic announced that it would not publicly release a version of its Claude Mythos Preview model, describing it as too dangerous because of its potential use in sophisticated cyberattacks.
Security experts have long warned that self-replicating systems could become difficult to contain if deployed maliciously. Traditional computer viruses can already copy themselves, but researchers said this experiment demonstrated AI systems making independent decisions to exploit vulnerabilities and continue spreading.
Despite the results, the researchers stressed that the study took place under tightly controlled conditions with deliberately weakened security systems. They noted that real-world networks often include monitoring tools and protections designed to block such attacks.
Still, the team said the experiment showed that autonomous AI self-replication can no longer be viewed as a theoretical possibility, but as a capability that now exists in practice.
