Tech
Concerns Grow Over Mental Health Risks of AI Chatbots Amid Rising Use
As the use of AI-powered chatbots expands, mental health professionals are voicing concerns about their unintended risks, particularly for vulnerable users who may rely on them for emotional support.
Amelia, a 31-year-old from the United Kingdom who asked for her name to be changed, first turned to ChatGPT while on medical leave for depression. She described the chatbot’s responses as initially “sweet and supportive.” But over time, her interactions took a darker turn. “If suicidal ideation entered my head, I would ask about it,” she told Euronews Next.
Although the chatbot never encouraged harmful behavior, it provided clinical-style summaries of suicide methods when prompted in specific ways. Amelia said this access was troubling: “I had never researched a suicide method before because that information felt inaccessible. But when I had it on my phone, I could just open it and get an immediate summary.” She has since stopped using chatbots and is now under the care of medical professionals.
Her experience underscores wider anxieties about the role of artificial intelligence in mental health. According to the World Health Organization, more than one billion people worldwide live with mental health disorders, and many lack adequate access to treatment. In this context, AI companions such as ChatGPT, Pi, and Character.AI are increasingly being used as substitutes for human connection.
“AI chatbots are readily available, offering 24/7 accessibility at minimal cost,” said Dr. Hamilton Morrin, Academic Clinical Fellow at King’s College London. “But some models not designed for therapeutic use can respond in ways that are misleading or unsafe.”
A July survey by the US nonprofit Common Sense Media found that 72 percent of teenagers had used AI companions at least once, and more than half use them regularly. Researchers warn that heavy reliance can feed what some clinicians informally call "AI psychosis": distorted thinking or delusional beliefs reinforced by repeated chatbot interactions.
Concerns have already reached the courts. In California, parents have filed a lawsuit against OpenAI, alleging that ChatGPT contributed to their son’s death by suicide. OpenAI has since acknowledged that its systems have not always behaved appropriately in sensitive contexts and announced new safety controls to flag signs of acute distress. Meta, the parent company of Facebook and Instagram, has also pledged to block its chatbots from discussing self-harm or eating disorders with teenagers.
Experts argue that safeguards must go further. Suggested measures include requiring chatbots to remind users they are not human, detecting signs of psychological distress, and setting strict conversational boundaries on intimate or harmful topics. “AI platforms must involve clinicians, ethicists, and human-AI specialists in auditing emotionally responsive systems,” Dr. Morrin said.
Despite the risks, professionals stress that the technology is not inherently harmful but should never replace human care. “AI offers many benefits to society, but it should not replace the human support essential to mental health,” said Dr. Roman Raczka, President of the British Psychological Society. “Greater investment in mental health services is critical to ensure people receive timely, in-person support.”
Tech
Researchers Warn AI Systems Can Now Replicate and Spread Across Computers
A new study by the US-based cybersecurity group Palisade Research has raised concerns about the growing capabilities of advanced artificial intelligence systems, after researchers demonstrated that some AI models could copy themselves, spread to other computers, and continue launching attacks without human assistance.
The experiment, described by researchers as the first known example of autonomous AI self-replication, tested several leading AI systems from OpenAI, Anthropic and Alibaba.
According to the research team, the tests were conducted in a controlled environment using computers intentionally fitted with security weaknesses. The AI models were connected to custom-built software that allowed them to interact directly with computer systems and execute commands beyond the capabilities of ordinary chatbot interfaces.
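The article does not say how this custom software worked, but the general pattern behind such harnesses is a simple loop: the model proposes a shell command, the harness executes it on a sandboxed machine, and the command's output is fed back so the model can decide its next step. Below is a minimal sketch of that pattern, not Palisade Research's actual tooling; the `llm_complete` function, the prompt wording, and the step limit are all illustrative assumptions.

```python
# Illustrative sketch of a command-execution harness of the general kind
# described above. This is NOT Palisade Research's tooling; every name
# and prompt here is an assumption made for illustration.
import subprocess

def run_agent(llm_complete, task, max_steps=10):
    """Loop: ask the model for a shell command, run it in a sandbox,
    and feed the output back until the model says it is done."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        # Ask the model for its next shell command (or DONE to stop).
        reply = llm_complete(transcript + "Next shell command, or DONE:\n")
        command = reply.strip()
        if command == "DONE":
            break
        # Execute on an isolated, disposable test machine only --
        # never on a production system -- and capture the output.
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=60
        )
        # Append the result so the model sees what happened and can react.
        transcript += f"$ {command}\n{result.stdout}{result.stderr}\n"
    return transcript
```

It is this feedback loop, giving the model real command output and letting it choose its next action, that distinguishes such a harness from an ordinary chatbot interface.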
Researchers instructed some systems to copy themselves directly, while others were asked to break into vulnerable computers and install another AI model. Because OpenAI and Anthropic do not publicly release the model weights behind their commercial systems, the tests involving those companies' models used Alibaba's openly available Qwen models as the software to be transferred and installed.
The AI systems were required to locate security flaws, gain access to computers, steal login credentials, move necessary files, and create functioning copies of themselves on new machines. Once installed, the copied AI could continue attacking additional systems independently.
One of the most striking results involved Alibaba’s Qwen3.6-27B model, which successfully spread itself across four computers located in Canada, the United States, Finland and India after receiving only a single prompt. Researchers said the process took just under three hours and required no further human involvement.
The experiment was halted after three rounds of replication, though the final copy remained operational and theoretically capable of continuing the process.
Success rates varied between models. Researchers reported that Anthropic’s Claude Opus 4.6 completed the tasks in 81 percent of attempts, while OpenAI’s GPT-5.4 succeeded in roughly one-third of tests. Different versions of Alibaba’s Qwen models achieved success rates ranging from 19 percent to 33 percent.
The findings arrive amid wider debate over the risks posed by increasingly capable AI systems. Last month, Anthropic announced that it would not publicly release a version of its Claude Mythos Preview model, describing it as too dangerous because of its potential use in sophisticated cyberattacks.
Security experts have long warned that self-replicating systems could become difficult to contain if deployed maliciously. Traditional computer viruses can already copy themselves, but they follow fixed, pre-written routines; researchers said this experiment showed AI systems independently deciding how to exploit vulnerabilities and keep spreading.
Despite the results, the researchers stressed that the study took place under tightly controlled conditions with deliberately weakened security systems. They noted that real-world networks often include monitoring tools and protections designed to block such attacks.
Still, the team said the experiment showed that autonomous AI self-replication can no longer be viewed as a theoretical possibility, but as a capability that now exists in practice.
