Health
Smartphone Use Before Age 13 Linked to Suicidal Thoughts and Poor Mental Health, Global Study Finds
A global study has found that children given smartphones before the age of 13 face significantly higher risks of mental health challenges, including suicidal thoughts, low self-worth, aggression, and detachment from reality.
The research, conducted by the nonprofit Sapien Labs and published in the Journal of Human Development and Capabilities, analyzed data from 100,000 people aged 18 to 24 across multiple countries. Participants self-reported on 47 aspects of their mental, emotional, social, and physical health to produce overall “mind health” scores.
The results show a striking pattern: the earlier a child received a smartphone, the worse their mental health in early adulthood. Young adults who got their first smartphone at age 13 had mind health scores around 30, but that figure dropped to nearly zero among those who received phones at just five years old.
The study also revealed that girls are particularly vulnerable. Nearly 9.5% of young women were classified as “struggling” with their mental health compared to 7% of young men, regardless of cultural or geographic background.
Key risk factors identified include disrupted sleep, poor emotional regulation, increased exposure to cyberbullying, and weakened family relationships. The findings remained consistent across socioeconomic groups and countries, suggesting a universal link between early smartphone use and deteriorating mental health.
Lead author Dr. Tara Thiagarajan has called for urgent action. “I’d like to see smartphones regulated like alcohol or tobacco,” she said. “This includes age restrictions, limits on social media access, mandatory digital literacy education, and holding tech companies accountable.”
She emphasized that younger children are particularly susceptible because of their still-developing cognitive and emotional capacities. “The strength of these results surprised me at first, but when you think about the fragile state of the developing mind, it begins to make sense,” she added.
In response to growing concerns, several European nations have already imposed classroom smartphone bans. France, Italy, the Netherlands, Luxembourg, and certain Spanish regions enforce full-day bans in schools, while other countries like Denmark, Portugal, and Cyprus are considering similar steps.
The European Union has also introduced legislation aimed at protecting children online. This includes the Digital Services Act, the General Data Protection Regulation, and the Audiovisual Media Services Directive. Most recently, the European Parliament voted to criminalize AI-generated child abuse images and online grooming practices.
As digital devices become increasingly common in children’s lives, researchers and policymakers are sounding the alarm on their long-term psychological impact—and calling for regulation before the effects become irreversible.
Novo Nordisk Teams Up With OpenAI to Accelerate Drug Discovery Using AI
Danish pharmaceutical giant Novo Nordisk has announced a new partnership with OpenAI aimed at integrating artificial intelligence across its drug development and business operations.
The collaboration, revealed on Tuesday, is expected to help the company identify new treatments more quickly and improve how medicines are developed, produced and delivered to patients. Novo Nordisk said the use of advanced AI tools will allow it to analyse vast and complex datasets, uncover patterns that were previously difficult to detect, and shorten the timeline from research to patient access.
Chief executive Mike Doustdar said the agreement marks an important step in positioning the company for the future of healthcare. He noted that millions of people living with chronic conditions such as obesity and diabetes still require better treatment options, adding that new therapies remain to be discovered.
Novo Nordisk is widely known for its leading treatments in these areas, including Ozempic and Wegovy, which have seen strong global demand in recent years. The company said integrating AI into daily workflows will allow its teams to test ideas more rapidly and bring innovations to market at a faster pace.
The partnership will not be limited to research and development. Both companies plan to apply AI tools to manufacturing processes, supply chains and commercial operations, with pilot programmes already set to begin. Full integration is expected by the end of the year.
OpenAI chief executive Sam Altman said artificial intelligence is transforming industries and has the potential to significantly improve outcomes in life sciences. He added that the collaboration would support faster scientific discovery and more efficient global operations, helping to shape the future of patient care.
The move comes as pharmaceutical companies increasingly turn to AI to gain an edge in drug discovery. Novo Nordisk has already invested in innovation through initiatives such as the Danish Centre for AI Innovation, developed in partnership with Nvidia and Denmark’s export and investment fund.
Competition in the sector is intensifying. US-based Eli Lilly, a key rival in the weight-loss drug market, recently announced its own AI-focused collaboration with Insilico Medicine to develop new treatments. The agreement, valued at up to $2.75 billion, highlights the growing role of AI in reshaping pharmaceutical research.
Industry analysts say such partnerships reflect a broader shift toward data-driven innovation in healthcare, where the ability to process and interpret large volumes of information is becoming increasingly important.
For Novo Nordisk, the partnership with OpenAI signals a commitment to staying at the forefront of this transformation, as companies race to harness technology in the search for new and more effective treatments.
Study Finds AI Models Fall Short in Early Medical Diagnosis
A new study has found that artificial intelligence language models still struggle with early diagnosis, one of the most critical aspects of medical care, raising concerns about their use without human oversight.
Researchers from Mass General Brigham reported that AI systems failed to produce an appropriate early diagnosis more than 80 per cent of the time. The findings, published in JAMA Network Open, highlight ongoing limitations in how these systems reason through complex clinical scenarios.
The study examined 21 large language models from developers including OpenAI, Google, Anthropic and xAI. Among those tested were versions of GPT, Gemini, Claude, Grok and DeepSeek.
Researchers used a structured evaluation tool known as PrIME-LLM to assess how well the models handled different stages of clinical reasoning. These stages included forming an initial diagnosis, ordering tests, reaching a final diagnosis and planning treatment. The models were tested using 29 standardised clinical scenarios, with information introduced gradually to mirror real-life patient cases.
While the systems showed relatively strong performance when identifying a final diagnosis, their ability to generate a differential diagnosis — a key step in distinguishing between conditions with similar symptoms — remained limited. This early-stage reasoning is widely regarded as essential in medical decision-making.
Marc Succi, a co-author of the study, said current models are not ready for independent clinical use. He noted that differential diagnosis represents a core part of medical practice that AI has yet to replicate effectively.
Another researcher, Arya Rao, said the findings show that AI performs best when given complete information but struggles when cases are still developing. She explained that the models are less reliable in situations where doctors must make judgments based on limited or uncertain data.
Despite these shortcomings, the study identified a group of higher-performing systems, including advanced versions of GPT, Gemini, Claude and Grok. These models achieved final diagnosis success rates ranging from around 60 per cent to over 90 per cent when provided with detailed clinical data such as lab results and imaging.
Experts not involved in the research also stressed the importance of caution. Susana Manso García said the findings reinforce that AI should not replace professional medical judgement. She advised that patients continue to seek guidance from qualified healthcare providers when dealing with health concerns.
The study concludes that while AI has made progress, it still requires close human supervision in clinical settings. Researchers say the technology shows promise as a support tool, but its current limitations mean it cannot yet be trusted to make independent medical decisions.