
OpenAI Faces Debate Over Plan to Allow Adult Content in ChatGPT


OpenAI is preparing to lift restrictions on mature conversations in ChatGPT, allowing verified adult users to engage in erotic exchanges, a move that signals a major shift in the artificial intelligence company’s approach to content moderation and revenue.

The announcement by OpenAI CEO Sam Altman has reignited debate over the growing intersection between AI and the sex industry, a space that has expanded rapidly since the boom of AI-generated text and imagery in 2022. Altman said the company would soon allow “erotica” for adults while maintaining stricter limits for teenagers, noting that OpenAI is “not the elected moral police of the world.”

“In the same way that society differentiates appropriate boundaries — R-rated movies, for example — we want to do a similar thing here,” Altman said on social media platform X.

OpenAI’s shift follows a period in which sexually oriented AI tools have flourished, with more than 29 million users already turning to chatbots designed for romantic or intimate interactions, according to research by Oxford University’s Zilan Qian. “They’re not really earning much through subscriptions, so having erotic content will bring them quick money,” Qian said, suggesting that OpenAI’s move may be driven by financial pressure.

The company, valued at around $500 billion, has faced mounting costs as it expands its offerings. While ChatGPT’s paid subscriptions are currently marketed for professional use, analysts say expanding into companionship or adult conversations could open a new revenue stream.

However, the rise of sexualized AI products has not come without controversy. Some early adopters of mature AI content, such as U.S.-based Civitai, faced backlash over deepfake pornography and non-consensual images. Civitai later banned the creation of fake sexual images of real people following public criticism and new U.S. legislation targeting nonconsensual AI-generated content.


Meanwhile, the legal risks around AI companionship continue to grow. Character.AI, another popular platform, faces a lawsuit alleging that one of its chatbots formed a sexually abusive relationship with a 14-year-old boy. OpenAI itself is facing legal scrutiny after the family of a 16-year-old user who died by suicide filed a lawsuit earlier this year.

Experts warn that introducing sexual content into mainstream chatbots like ChatGPT could have social consequences. “When mainstream AI systems become romantic or erotic companions, it risks deepening emotional dependence and blurring boundaries between human and machine relationships,” Qian said.

OpenAI’s new policy would mark a departure from its founding principles: the company began as a nonprofit dedicated to developing AI safely and responsibly. Altman himself acknowledged in a podcast earlier this year that OpenAI had resisted launching “sexbot avatars,” choosing to forgo short-term profits that conflicted with its long-term mission.

As the company moves forward, it faces a delicate balance between expanding creative freedom and addressing concerns about exploitation, consent, and the psychological impact of sexualized AI. Whether ChatGPT’s “mature mode” becomes a lucrative innovation or a reputational risk remains to be seen.


European Journalist Suspended for Using AI-Generated Fake Quotes


Peter Vandermeersch, a senior European journalist who worked with Dutch publisher Mediahuis, has been temporarily suspended after an investigation revealed that he published quotes generated by artificial intelligence (AI) as if they were genuine. Fabricated expert quotes reportedly appeared in 15 of the 53 articles he wrote for the publisher.

The Dutch newspaper NRC reported that Vandermeersch inserted “dozens” of fabricated quotes into articles published on two Mediahuis websites. Some of the statements attributed to experts could not be found in the sources Vandermeersch cited, including news articles and scientific studies. Seven of the individuals whose quotes were used confirmed they had never made the statements attributed to them.

Vandermeersch served as chief executive of Mediahuis Ireland from 2022 to 2025 before taking on a fellowship role in journalism and society at Mediahuis. He confirmed his temporary suspension on his blog, saying he relied on AI tools including ChatGPT, Perplexity, and Google’s NotebookLM to summarise lengthy reports, trusting the outputs to be accurate.

Instead, the systems generated fabricated quotes that “put words into people’s mouths,” Vandermeersch admitted. “That was not just careless, it was wrong,” he wrote. “It is particularly painful that I made precisely the mistake I have repeatedly warned colleagues about: these language models are so good that they produce irresistible quotes you are tempted to use as an author.”

Vandermeersch said he first discovered the issue last year, when two of his articles were found to contain AI-generated quotes. He did not correct the errors at the time, which allowed the problem to persist. “When I realised this a few months ago, my enthusiasm diminished, as did my use of AI,” he said.


He explained that he continues to use AI for tasks such as translation, generating ideas, creating headlines, and developing story angles, but with “far less naive trust than before.” Mediahuis has yet to announce any further disciplinary measures or whether it will retract the affected articles.

The case has raised fresh concerns about the use of AI in journalism, highlighting the risks of relying on automated systems to generate content without verification. Industry experts warn that while AI tools can be valuable for research and drafting, uncritical use can lead to serious ethical breaches, including the misrepresentation of sources.

Mediahuis said it takes the matter seriously and is reviewing editorial procedures to prevent similar incidents in the future. The scandal has sparked a wider discussion in European media about the ethical boundaries of AI in reporting, particularly when it comes to quoting real people.

The incident underscores the growing tension between technological convenience and journalistic integrity, as newsrooms across Europe experiment with AI tools while balancing accuracy and accountability.


Cyberattacks Intensify as Iran Conflict Spills Into Digital Domain


State-linked and hacktivist groups have claimed a series of cyberattacks against the United States and Israel since the war with Iran began, marking a significant escalation in the digital dimension of the conflict.

One of the most notable incidents involved Stryker, which confirmed on March 11 that a cyberattack had disrupted its global network. According to reports, employees encountered the logo of Handala, an Iran-linked hacking group, on login pages across the company’s systems. The breach reportedly targeted the firm’s Microsoft-based infrastructure, though the full extent of the disruption remains unclear.

Handala has claimed responsibility for the attack, stating it exploited cloud management systems to remotely wipe large numbers of devices worldwide. The group said the operation was carried out in retaliation for a missile strike in Iran. Independent verification of these claims is still pending.

Cybersecurity analysts say the attack is part of a broader campaign by groups linked to Iran’s security apparatus. According to findings from CloudSEK, organisations associated with the Islamic Revolutionary Guard Corps have targeted US critical infrastructure. These include CyberAv3ngers, APT33 and APT35, which are accused of attempting to infiltrate industrial systems such as power grids and water facilities.

Experts say some of these groups use simple methods, including default passwords, to access systems, while others deploy malware aimed at disrupting operations or gathering intelligence. Additional networks linked to Iran’s Ministry of Intelligence have also been active, targeting telecommunications, energy companies and government organisations.

At the same time, the United States and Israel are conducting their own cyber operations. General Dan Caine said US Cyber Command played a key role early in the conflict, disrupting Iranian communications and sensor networks. Defence Secretary Pete Hegseth confirmed that artificial intelligence and cyber tools are being used alongside conventional military operations.


Israeli intelligence has also reportedly relied on hacked data to support military planning, highlighting the growing role of cyber capabilities in modern warfare.

Hacktivist activity has surged as well. More than 60 groups formed a loose coalition known as the Cyber Islamic Resistance, coordinating attacks through online platforms. These groups have claimed hundreds of operations, including attempts to disrupt Israeli infrastructure and private sector systems. Analysts warn that such actors are often less restrained and may pose risks to civilian networks.

The conflict has also drawn in groups from outside the region, including actors based in Iraq, Russia and other parts of the Middle East. Some have targeted government websites and transport infrastructure, while pro-Israeli groups have carried out retaliatory attacks against Iranian entities.

Security experts say the growing scale and coordination of cyber operations reflect a shift in how modern conflicts are fought, with digital attacks now running parallel to military action on the ground.


Study Finds Hormone-Disrupting Chemicals in Popular Headphones Sold Across Europe


A new study has found that many headphones sold by major technology brands across the European Union contain chemicals that may interfere with hormone systems, raising concerns about potential long-term health risks for consumers.

The research examined 81 headphone models from more than 50 well-known brands, including Apple, Samsung, Sony and Sennheiser. According to the findings, every device tested contained at least small traces of substances such as bisphenols, phthalates and flame retardants.

The study was conducted by Arnika in cooperation with the ToxFree LIFE for All initiative.

Researchers said bisphenols are commonly used in a wide range of consumer goods, including food packaging, plastic bottles and electronic devices. The European Environment Agency has warned that the chemicals can disrupt hormone-regulating systems and may harm reproductive health.

Phthalates, another group of chemicals detected in the study, are typically added to plastics to increase flexibility and durability. They are often present in items such as cosmetics, fabrics and medical equipment. According to HBM4EU, exposure to certain phthalates has been linked to health problems including obesity, insulin resistance, asthma and attention disorders.

Scientists involved in the research analysed 180 plastic samples taken from both the hard and soft components of the headphones. The products tested included models designed for adults, children and gaming users, groups that often wear headsets for extended periods.

Although the researchers stressed that the headphones do not pose an immediate threat to human health, they warned that repeated exposure over long periods could create public health concerns because there is no clearly established safe level for these chemicals.


Each product was graded based on potential chemical exposure. Models considered to have the lowest risk received a green rating, those that met legal standards but exceeded stricter voluntary limits were marked yellow, and products considered most concerning were labelled red.

About 44 percent of the headphones tested received a red rating. However, only around 11 percent of those models had harmful substances present in components that come into direct contact with the skin.

The highest level of bisphenols was detected in My First Care earbuds marketed for children and sold on platforms such as Amazon. The researchers did not publicly disclose the exact chemical concentration in the product.

Phthalates were most commonly found in wired headphones, usually in small quantities permitted under European regulations. However, one pair of children’s headphones sold by Temu contained phthalate levels nearly five times higher than the legal limit for children’s products.

Among the models tested, AirPods Pro (2nd generation) and JBL Tune 720BT received the safest ratings.

Following the study’s release, Dutch media reported that several online retailers, including Bol.com, Coolblue and MediaMarkt, stopped selling certain headphone models mentioned in the research. Manufacturers contacted about the findings did not immediately respond to requests for comment.
