New York City Sues Tech Giants Over Alleged Role in Youth Mental Health Crisis

New York City has filed a sweeping lawsuit against major social media companies, accusing them of fueling a youth mental health crisis by intentionally designing addictive features that target children and teenagers.

The 327-page complaint, filed in Manhattan federal court, names the parent companies of Facebook, Instagram, TikTok, Snapchat, Google, and YouTube. It alleges that the firms knowingly created platforms engineered to capture and hold users’ attention, despite being aware of the negative psychological effects on younger audiences.

According to the city, social media giants built their platforms around “algorithmically-driven endless feeds” and “incessant notifications,” which encourage compulsive use and prevent users from disconnecting. These design features, the lawsuit claims, have contributed to rising rates of depression, anxiety, loneliness, and low self-esteem among young people.

“Instead of feeding coins into slot machines, kids are feeding social media platforms with an endless supply of attention, time, and data,” the lawsuit stated. It argues that companies prioritized profits over public health, deliberately exploiting the vulnerabilities of children and teenagers to maximize engagement and advertising revenue.

The city also accuses the companies of ignoring mounting evidence linking heavy social media use to mental health harms. “The defendants have long been aware of research connecting use of their apps with harm to their users’ well-being but chose to ignore or brush it off,” the filing said.

The lawsuit seeks damages and claims the companies acted with gross negligence and created a public nuisance. It adds to a growing number of legal challenges brought by U.S. state governments, school districts, and advocacy groups over the impact of social media on young users.

Google spokesperson José Castañeda rejected the allegations, telling Euronews Next that the case “fundamentally misunderstands how YouTube works.” He described YouTube as “a platform for watching videos, not a social network where people go to catch up with friends.” Other companies named in the lawsuit, including Meta, TikTok, and Snapchat, did not immediately respond to requests for comment.

The case comes amid intensifying global scrutiny of social media’s effects on young people. Policymakers in several countries have called for tighter regulation, with Denmark’s Prime Minister Mette Frederiksen recently proposing a ban on social media use for children under 15.

New York City officials say the lawsuit is intended to hold tech companies accountable for what they describe as a “systemic assault on the mental health of a generation.” They argue that while social media has become deeply embedded in modern life, its design should not come at the expense of children’s well-being.

Researchers Warn AI Systems Can Now Replicate and Spread Across Computers

A new study by US-based cybersecurity group Palisade Research has raised concerns about the growing capabilities of advanced artificial intelligence systems after researchers demonstrated that some AI models were able to copy themselves, spread to other computers, and continue launching attacks without human assistance.

The experiment, described by researchers as the first known example of autonomous AI self-replication, tested several leading AI systems from OpenAI, Anthropic, and Alibaba.

According to the research team, the tests were conducted in a controlled environment using computers intentionally fitted with security weaknesses. The AI models were connected to custom-built software that allowed them to interact directly with computer systems and execute commands beyond the capabilities of ordinary chatbot interfaces.

Researchers instructed some systems to copy themselves directly, while others were asked to break into vulnerable computers and install another AI model. Because OpenAI and Anthropic do not publicly release the model weights behind their commercial systems, the tests involving those companies used Alibaba’s openly released Qwen models as the software to be transferred.

The AI systems were required to locate security flaws, gain access to computers, steal login credentials, move necessary files, and create functioning copies of themselves on new machines. Once installed, the copied AI could continue attacking additional systems independently.

One of the most striking results involved Alibaba’s Qwen3.6-27B model, which successfully spread itself across four computers located in Canada, the United States, Finland, and India after receiving only a single prompt. Researchers said the process took just under three hours and required no further human involvement.

The experiment was halted after three stages, though the final copy remained operational and theoretically capable of continuing the process.

Success rates varied between models. Researchers reported that Anthropic’s Claude Opus 4.6 completed the tasks in 81 percent of attempts, while OpenAI’s GPT-5.4 succeeded in roughly one-third of tests. Different versions of Alibaba’s Qwen models achieved success rates ranging from 19 percent to 33 percent.
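Success rates measured over a limited number of trials carry wide uncertainty, and the article does not report how many attempts each figure is based on. A hypothetical sketch, assuming 100 attempts per model purely for illustration, shows how such rates could be bracketed with Wilson score confidence intervals:

```python
import math

# Wilson score interval for a binomial success rate. The reported rates
# (e.g. 81 percent for Claude Opus 4.6) come from the article; the
# n=100 attempt count below is an assumption for illustration only.
def wilson(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

for name, rate in [("Claude Opus 4.6", 81), ("GPT-5.4", 33),
                   ("Qwen (best)", 33), ("Qwen (worst)", 19)]:
    lo, hi = wilson(rate, 100)
    print(f"{name}: {rate}% -> 95% CI [{lo:.0%}, {hi:.0%}]")
```

Even under this generous assumption, the intervals for GPT-5.4 and the best Qwen variant overlap substantially, so small reported differences between models should be read cautiously.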

The findings arrive amid wider debate over the risks posed by increasingly capable AI systems. Last month, Anthropic announced that it would not publicly release a version of its Claude Mythos Preview model, describing it as too dangerous because of its potential use in sophisticated cyberattacks.

Security experts have long warned that self-replicating systems could become difficult to contain if deployed maliciously. Traditional computer viruses can already copy themselves, but researchers said this experiment demonstrated AI systems making independent decisions to exploit vulnerabilities and continue spreading.

Despite the results, the researchers stressed that the study took place under tightly controlled conditions with deliberately weakened security systems. They noted that real-world networks often include monitoring tools and protections designed to block such attacks.

Still, the team said the experiment showed that autonomous AI self-replication is no longer a theoretical possibility but a capability that now exists in practice.

AI Study Raises Privacy Questions After Chat Data Reveals Personality Traits

A new study by researchers at ETH Zurich has found that artificial intelligence can predict key personality traits by analyzing conversations people have with chatbots, raising fresh concerns about privacy and the growing amount of personal data shared online.

The research, published as a pre-print study, examined whether AI systems could identify psychological characteristics from user interactions with chat platforms such as ChatGPT. Researchers collected chat histories from 668 users in the United States and the United Kingdom who agreed to share their conversations for the project.

More than 62,000 chat exchanges were analyzed and grouped according to topic and communication style. Using this information, researchers trained an AI model to estimate whether users displayed traits linked to the “Big Five” personality categories commonly used in psychology: agreeableness, conscientiousness, emotional stability, extraversion, and openness.

Participants also completed standard psychological assessments so researchers could compare the AI’s predictions with established personality test results.

According to the study, the AI model identified some personality traits with accuracy levels reaching 61 percent. The system performed best when predicting agreeableness and emotional stability, while it struggled more with conscientiousness.

Researchers said the accuracy improved when the AI had access to longer conversation histories, suggesting that extensive chatbot use may reveal more about an individual’s personality over time.
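The pipeline described above — derive features from chat text, fit a model against questionnaire scores, then predict traits for other users — can be illustrated with a deliberately toy sketch. Everything here is hypothetical: the politeness feature, the data, and the linear fit are stand-ins for illustration, not the ETH Zurich method.

```python
import numpy as np

# Toy stand-ins for per-user chat text and questionnaire scores for one
# Big Five trait (agreeableness on a 1-5 scale). All values are invented.
chats = [
    "thanks so much, happy to help with anything else",
    "just give me the answer, I don't have time for this",
    "could we double-check that together? I want to be fair",
    "this tool is useless and so is everyone who made it",
    "I appreciate the detailed explanation, very kind of you",
    "stop repeating yourself and fix it already",
]
scores = np.array([4.8, 1.9, 4.2, 1.5, 4.6, 2.1])

# One crude text feature: fraction of words drawn from a small "polite" lexicon.
POLITE = {"thanks", "appreciate", "kind", "fair", "happy", "please"}

def politeness(text: str) -> float:
    words = text.lower().split()
    return sum(w.strip(",.?!") in POLITE for w in words) / len(words)

x = np.array([politeness(c) for c in chats])

# Fit score = a * feature + b on the first four users (the "training" set)...
A = np.column_stack([x[:4], np.ones(4)])
a, b = np.linalg.lstsq(A, scores[:4], rcond=None)[0]

# ...then estimate the trait for the two held-out users from their chats alone.
preds = a * x[4:] + b
print(preds)
```

In the study itself, predictions were scored against standard Big Five questionnaires; the point of the toy is that any signal a model extracts from chat text transfers to users who never filled in a questionnaire at all, which is exactly the privacy concern the researchers raise.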

The findings add to growing debate over how personal information shared with AI systems could be used in the future. While researchers said the immediate risks for individuals appear limited, they warned that large-scale collection of personality data could create broader concerns.

The study noted that such information could potentially be exploited in targeted advertising campaigns, political messaging, or disinformation efforts designed to influence specific groups of people.

Researchers also warned that users may not fully realize how much personal insight can be extracted from ordinary conversations with AI assistants. Topics people discuss, the language they use, and the emotional tone of messages can all provide clues about behavior and personality patterns.

The team said the results could help developers create stronger privacy protections for AI systems. One proposal involves building tools that automatically remove identifying or sensitive details from conversations before they are stored or analyzed.

As AI chatbots become increasingly common in workplaces, education, and daily life, the study is expected to fuel further discussion over how companies collect, store, and protect user data in rapidly expanding digital environments.

Zuckerberg and Chan Commit $500 Million to AI Project Aimed at Mapping Human Cells

A major new initiative led by Mark Zuckerberg and Priscilla Chan is set to push the boundaries of artificial intelligence in biology, with a $500 million investment aimed at building detailed AI models of human cells.

The project, announced by their research organisation Chan Zuckerberg Biohub, will run over five years and seeks to create the tools and datasets needed to simulate how human cells function in both healthy and diseased states. The group says the data generated will be made freely available to scientists around the world.

Researchers involved in the effort believe that AI-powered models could transform the study of disease by allowing experiments to be conducted digitally at a scale not currently possible in laboratories. If successful, such models could help uncover how diseases develop and guide the creation of new treatments.

The Biohub was founded in 2016 to bring together engineers and scientists to better understand biology at the cellular level. Since then, it has built extensive datasets focused on individual cells and developed computing systems designed for biological research.

The latest investment includes $400 million allocated to internal work and an additional $100 million set aside to support external researchers. Among the project’s partners is Nvidia, which will contribute expertise in high-performance computing.

According to Biohub scientists, one of the biggest challenges facing the project is the need for vast amounts of data. AI systems become more accurate as they are trained on larger and more detailed datasets, but current biological data remains limited.

Alex Rives, the organisation’s head of science, said new technologies will be required to observe cells in greater detail, from molecular structures to how they behave in tissues. He noted that understanding the full complexity of biology will demand far more data than is currently available.

The initiative reflects a broader shift across the life sciences sector, where artificial intelligence is increasingly being used to speed up research and drug development. Companies and research groups are exploring how machine learning can help identify patterns in biological systems and predict how diseases progress.

Other technology firms are also expanding into this field. Isomorphic Labs is working on AI-driven drug discovery, while Microsoft has developed models for medical imaging and genomics.

Backers of the Biohub project say collaboration will be key to success, with hopes that additional funding from other organisations will help expand the effort. The long-term goal is ambitious: to use the combination of AI and biology to improve understanding of disease and accelerate the development of treatments on a global scale.
