OpenAI Faces Debate Over Plan to Allow Adult Content in ChatGPT

OpenAI is preparing to lift restrictions on mature conversations in ChatGPT, allowing verified adult users to engage in erotic exchanges, a move that signals a major shift in the artificial intelligence company’s approach to content moderation and to generating revenue.

The announcement by OpenAI CEO Sam Altman has reignited debate over the growing intersection between AI and the sex industry, a space that has expanded rapidly since the boom of AI-generated text and imagery in 2022. Altman said the company would soon allow “erotica” for adults while maintaining stricter limits for teenagers, noting that OpenAI is “not the elected moral police of the world.”

“In the same way that society differentiates appropriate boundaries — R-rated movies, for example — we want to do a similar thing here,” Altman said on social media platform X.

OpenAI’s shift follows a period in which sexually oriented AI tools have flourished, with more than 29 million users already turning to chatbots designed for romantic or intimate interactions, according to research by Oxford University’s Zilan Qian. “They’re not really earning much through subscriptions, so having erotic content will bring them quick money,” Qian said, suggesting that OpenAI’s move may be driven by financial pressure.

The company, valued at around $500 billion, has faced mounting costs as it expands its offerings. While ChatGPT’s paid subscriptions are currently marketed for professional use, analysts say expanding into companionship or adult conversations could open a new revenue stream.

However, the rise of sexualized AI products has not come without controversy. Early platforms for mature AI content, such as the U.S.-based Civitai, faced backlash over deepfake pornography and non-consensual images. Civitai later banned the creation of fake sexual images of real people following public criticism and new U.S. legislation targeting non-consensual AI-generated content.

Meanwhile, the legal risks around AI companionship continue to grow. Character.AI, another popular platform, faces a lawsuit alleging that one of its chatbots formed a sexually abusive relationship with a 14-year-old boy. OpenAI itself is facing legal scrutiny after the family of a 16-year-old user who died by suicide filed a lawsuit earlier this year.

Experts warn that introducing sexual content into mainstream chatbots like ChatGPT could have social consequences. “When mainstream AI systems become romantic or erotic companions, it risks deepening emotional dependence and blurring boundaries between human and machine relationships,” Qian said.

OpenAI’s new policy would mark a departure from its founding principles — the company began as a nonprofit dedicated to developing AI safely and responsibly. Altman himself acknowledged in a podcast earlier this year that OpenAI had resisted launching “sexbot avatars” to avoid short-term profits that conflicted with its long-term mission.

As the company moves forward, it faces a delicate balance between expanding creative freedom and addressing concerns about exploitation, consent, and the psychological impact of sexualized AI. Whether ChatGPT’s “mature mode” becomes a lucrative innovation or a reputational risk remains to be seen.

Cambridge Index Reveals Global Black Market for Fake Social Media Verifications

A new index developed by the University of Cambridge has revealed the scale and affordability of the underground market for fake social media account verifications, raising fresh concerns about online manipulation and digital security. According to researchers, fake verification badges can be purchased for as little as eight cents, enabling the rapid creation of networks that imitate authentic users across major online platforms.

The Cambridge Online Trust and Safety Index (COTSI), launched on Thursday, is described as the first global tool capable of tracking real-time prices for verifying fraudulent accounts. The index monitors more than 500 platforms, including TikTok, Instagram, Amazon, Spotify and Uber. By analysing data from sellers operating across the dark web and black-market channels, the project highlights how accessible and inexpensive these services have become.

Researchers say the low cost of creating fake accounts is contributing to the rise of “bot armies” — large groups of automated or semi-automated profiles designed to mimic genuine human activity. These networks can distort online conversations, amplify misleading content, and promote scams or commercial products. They can also be deployed to influence political messaging, creating an illusion of public support or opposition during major events such as elections or policy debates.

The team behind the index said the findings come at a sensitive time for governments and regulators working to contain misinformation. Many popular platforms have reduced investment in content monitoring during the past two years, while others have introduced programmes that reward users for generating high volumes of engagement. Researchers warn that such incentives may encourage the use of artificially inflated interactions, making fake accounts even more valuable to those seeking influence.

According to Cambridge analysts, the market for fraudulent verification has become highly sophisticated. Sellers offer tiered packages, guaranteeing features such as blue-badge symbols, verified rankings or the appearance of longstanding account history. Prices vary by platform and country, but the index shows that even the most complex packages remain within easy reach for groups attempting to manipulate public debate or carry out coordinated campaigns.

The launch of COTSI marks the first attempt to document these prices on a global scale. By presenting live data on the cost of creating fake identities, researchers hope to give policymakers, technology companies and security agencies a clearer picture of how digital manipulation is evolving. The study’s authors stress that tracking these markets is essential for understanding the risks posed by unauthenticated accounts, particularly during periods of political tension.

The university said the index will be updated regularly and will remain publicly accessible as part of its efforts to strengthen digital transparency worldwide.

Experts Question Impact of Australia’s New Social Media Ban for Children Under 16

Australia has introduced sweeping restrictions that prevent children under 16 from creating or maintaining accounts on major social media platforms, but experts warn the measures may not significantly change young people’s online behaviour. The restrictions, which took effect on December 10, apply to platforms including Facebook, Instagram, TikTok, Snapchat, YouTube, Twitch, Reddit and X.

Under the new rules, children cannot open accounts, yet they can still access most platforms without logging in—raising questions about how effective the regulations will be in shaping online habits. The eSafety Commissioner says the reforms are intended to shield children from online pressures, addictive design features and content that may harm their health and wellbeing.

Social media companies are required to block underage users through age-assurance tools that rely on facial-age estimation, ID uploads or parental consent. Ahead of the rollout, authorities tested 60 verification systems across 28,500 facial recognition assessments. The results showed that while many tools could distinguish children from adults, accuracy declined for users aged 16 and 17, for girls and for non-Caucasian users, whose estimated ages could be off by two years or more. Experts say these limitations mean many teenagers may still find ways around the rules.

“How do they know who is 14 or 15 when the kids have all signed up as being 75?” asked Sonia Livingstone, a social psychology professor at the London School of Economics. She warned that misclassifications will be common as platforms attempt to enforce the regulations.

Meta acknowledged the challenge, saying complete accuracy is unlikely without requiring every user to present government ID—something the company argues would raise privacy and security concerns. Users over 16 who lose access by mistake are allowed to appeal.

Several platforms have criticised the ban, arguing that it removes teenagers from safer, controlled environments. Meta and Google representatives told Australian lawmakers that logged-in teenage accounts already come with protections that limit contact from unknown users, filter sensitive subjects and disable personalised advertising. Experts say these protections are not always effective, citing studies where new YouTube and TikTok accounts quickly received misogynistic or self-harm-related content.

Analysts expect many teenagers to shift to smaller or lesser-regulated platforms. Apps such as Lemon8, Coverstar and Tango have surged into Australia’s top downloads since the start of December. Messaging apps like WhatsApp, Telegram and Signal—exempt from the ban—have also seen a spike in downloads. Livingstone said teenagers will simply “find alternative spaces,” noting that previous bans in other countries pushed young users to new platforms within days.

Researchers caution that gaming platforms such as Discord and Roblox, also outside the scope of the ban, may become new gathering points for young Australians. Studies will be conducted to assess the long-term impact on mental health and whether the restrictions support or complicate parents’ efforts to regulate screen time.

Experts say it may take several years to determine whether the ban delivers meaningful improvements to children’s wellbeing.

OECD Warns of Sharp Rise in Cyberbullying Across Europe

Cyberbullying among adolescents has increased across every European country included in a new report by the Organisation for Economic Co-operation and Development (OECD), raising concerns among researchers, educators and child-protection advocates. The findings, part of the OECD’s How’s Life for Children in the Digital Age? report, show that online harassment is now affecting young people in all 29 countries and regions surveyed, with wide disparities between nations.

The data, which covers children aged 11, 13 and 15, reveals rates ranging from 7.5 per cent in Spain to 27.1 per cent in Lithuania. The European average stands at 15.5 per cent. Alongside Lithuania, the countries with the highest levels include Latvia, Poland, England, Hungary, Estonia, Ireland, Scotland, Slovenia, Sweden, Wales, Finland and Denmark. Nations such as Portugal, Greece, France, Germany and Italy recorded lower-than-average levels.

Cyberbullying in the study refers to repeated or intentional harassment online, including hostile messages, posts designed to ridicule, or the sharing of unflattering or inappropriate images without consent. The OECD noted that online abuse often involves a power imbalance and is amplified by the reach of digital platforms.

Experts attribute national differences to a combination of technological access, cultural norms and institutional preparedness. James O’Higgins Norman, UNESCO Chair on Bullying and Cyberbullying at Dublin City University, said variations in smartphone use, internet penetration and dominant social media platforms influence how often young people are exposed to harmful interactions. He added that cultural attitudes toward conflict and aggression, as well as the quality of school-based prevention programmes, shape each country’s experience.

Specialists from the European Antibullying Network pointed to digital literacy as a key factor. Countries that teach online safety as part of the school curriculum tend to see better outcomes. They also highlighted broader social and economic inequalities, noting that communities with fewer resources often struggle to support vulnerable children effectively.

The report shows that cyberbullying increased everywhere between the 2017–18 and 2021–22 survey periods. Denmark, Lithuania, Norway, Slovenia, Iceland and the Netherlands recorded jumps of more than five percentage points. The OECD average rose from 12.1 to 15.5 per cent. Researchers say the rise coincided with increased access to smartphones and longer daily screen time among adolescents.

Experts agree that the COVID-19 pandemic accelerated the trend. With schools closed and socialising taking place online, young people spent more time on platforms where conflicts could quickly escalate. Digital environments that offer anonymity and instant communication can weaken empathy and accountability, making hostile behaviour more likely, O’Higgins Norman said. He added that some countries are now reporting signs of stabilisation as in-person schooling has resumed.

Girls are more likely than boys to report being cyberbullied in most countries. Across the OECD sample, the rate is 16.4 per cent for girls and 14.3 per cent for boys. Researchers link this gap to the nature of online interactions, as girls tend to engage more in social-media communication, where relational forms of aggression — such as exclusion or image-based harassment — are more common.

Family structure also plays a significant role. Adolescents living in one-parent households report a cyberbullying rate of 19.8 per cent, compared with 14.1 per cent among those living with two parents. Experts say single parents often face heavier time and financial pressures, reducing their capacity to supervise online activity. Young people in such households may also spend more time online for social connection, increasing exposure to risk.

The OECD’s findings add to growing calls for more comprehensive national strategies, stronger digital-literacy education and support structures that reflect the realities of adolescent online life.
