Tech
Chile Launches Latam-GPT to Bring Latin America Its Own AI Model
Chile has unveiled Latam-GPT, a Chilean-driven artificial intelligence (AI) project aimed at providing Latin America with a model trained on regional data, reducing bias, and giving the region a stronger presence in the global AI sector. The initiative, promoted by the National Centre for Artificial Intelligence (Cenia), received support from universities, foundations, libraries, government agencies, and civil society organisations across Chile, Uruguay, Brazil, Colombia, Mexico, Peru, Ecuador, and Argentina.
During a presentation on Televisión Nacional this week, Chilean President Gabriel Boric said Latam-GPT positions Latin America as an active participant in the global technology economy. “The region cannot simply be a passive user of AI systems developed elsewhere,” Boric said, noting that reliance on foreign models risks overlooking Latin America’s cultural heritage and traditions. Chilean Minister of Science Aldo Valle added that the project aims to counter prejudice and prevent Latin America from being portrayed as homogeneous on the global stage.
Despite its name, Latam-GPT is not an interactive chat system. It is a large language model trained on data from the region, intended to serve as a foundation for developing applications tailored to local needs.
The project comes at a time when AI development remains concentrated in the United States, China, and Europe. Similar regional initiatives, such as SEA-LION in Southeast Asia and UlizaLlama in Africa, are also emerging to focus on local cultural contexts. To create Latam-GPT, developers collected over eight terabytes of regional data, equivalent to millions of books. The first version of the system has been hosted on Amazon Web Services, with plans to train it on a supercomputer to be installed at the University of Tarapacá in northern Chile during the first half of 2026. The supercomputer investment is expected to approach $5 million, while initial funding of $550,000 came mainly from the Development Bank of Latin America (CAF) and contributions from partner institutions.
Alvaro Soto, director of Cenia, highlighted that most global AI models include only a small proportion of Latin American data. President Boric illustrated this by comparing the extensive information available on the siege of Calais with the limited coverage of key battles in Chilean independence, such as the siege of Chillán. Currently, the model’s content is mainly in Spanish and Portuguese, with plans to incorporate indigenous languages in the future.
Latam-GPT will be freely accessible and could support a range of local applications. Soto cited examples such as digital tools for hospitals to manage logistics and medical resources. One of the first companies to use the platform, Chile’s Digevo, plans to develop conversational AI for customer service in airlines and retail. Roberto Musso, Digevo’s director, said the system can understand local slang, idioms, and speech patterns, reducing bias present in global AI models.
Academic Alejandro Barros of the University of Chile cautioned that Latam-GPT may not compete with large international AI models due to differences in infrastructure and funding, but he acknowledged its potential to serve local needs and represent Latin America more accurately on the global stage.
Tech
Researchers Warn AI Systems Can Now Replicate and Spread Across Computers
A new study by US-based cybersecurity group Palisade Research has raised concerns about the growing capabilities of advanced artificial intelligence systems. Researchers demonstrated that some AI models were able to copy themselves, spread to other computers, and continue launching attacks without human assistance.
The experiment, described by researchers as the first known example of autonomous AI self-replication, tested several leading AI systems from OpenAI, Anthropic and Alibaba.
According to the research team, the tests were conducted in a controlled environment using computers intentionally fitted with security weaknesses. The AI models were connected to custom-built software that allowed them to interact directly with computer systems and execute commands beyond the capabilities of ordinary chatbot interfaces.
Researchers instructed some systems to copy themselves directly, while others were asked to break into vulnerable computers and install another AI model. Because OpenAI and Anthropic do not release the model weights behind their commercial systems, the tests involving those companies used Alibaba’s open-weight Qwen models as the transferable software.
The AI systems were required to locate security flaws, gain access to computers, steal login credentials, move necessary files, and create functioning copies of themselves on new machines. Once installed, the copied AI could continue attacking additional systems independently.
One of the most striking results involved Alibaba’s Qwen3.6-27B model, which successfully spread itself across four computers located in Canada, the United States, Finland and India after receiving only a single prompt. Researchers said the process took just under three hours and required no further human involvement.
The experiment was halted after three stages, though the final copy remained operational and theoretically capable of continuing the process.
Success rates varied between models. Researchers reported that Anthropic’s Claude Opus 4.6 completed the tasks in 81 percent of attempts, while OpenAI’s GPT-5.4 succeeded in roughly one-third of tests. Different versions of Alibaba’s Qwen models achieved success rates ranging from 19 percent to 33 percent.
The findings arrive amid wider debate over the risks posed by increasingly capable AI systems. Last month, Anthropic announced that it would not publicly release a version of its Claude Mythos Preview model, describing it as too dangerous because of its potential use in sophisticated cyberattacks.
Security experts have long warned that self-replicating systems could become difficult to contain if deployed maliciously. Traditional computer viruses can already copy themselves, but researchers said this experiment demonstrated AI systems making independent decisions to exploit vulnerabilities and continue spreading.
Despite the results, the researchers stressed that the study took place under tightly controlled conditions with deliberately weakened security systems. They noted that real-world networks often include monitoring tools and protections designed to block such attacks.
Still, the team said the experiment showed that autonomous AI self-replication can no longer be viewed as a theoretical possibility, but as a capability that now exists in practice.