Tech

Hacker Group Accesses Data of Over 200 Million Pornhub Users

Data from more than 200 million Pornhub users has reportedly been accessed by a hacker group, raising concerns over online privacy. The Canadian-owned adult website confirmed that an “unauthorised party gained unauthorised access to analytics data” affecting some Premium users.

According to independent technology news site BleepingComputer, the data was accessed by ShinyHunters, a Western-based hacking collective. The information reportedly includes users’ viewing habits, search history, and location data.

Pornhub clarified that the incident involved a “third-party analytics service provider” and affected only a subset of its Premium users. The company said its internal systems had not been breached and confirmed that passwords, payment information, and other financial details remained secure.

The hacking group reportedly made an extortion demand in connection with the incident. While the company acknowledged the threat, it noted that the analytics data in question came from Mixpanel, a service it had stopped using in 2021. This suggests the data may be several years old.

Pornhub stated that it had immediately launched an internal investigation and was working closely with authorities and with Mixpanel. “We are working diligently to determine the nature and scope of the reported incident,” the company said in a statement.

With more than 100 million daily visits worldwide, Pornhub is one of the largest adult websites globally. While the breach reportedly did not include sensitive financial information, the exposure of viewing and search history raises privacy concerns for millions of users.

Cybersecurity experts have warned that such incidents highlight the risks of storing personal and behavioural data with third-party service providers. Even if direct systems remain secure, historical analytics data can still be vulnerable to hacking and exploitation.

The incident also underscores the broader challenges faced by digital platforms in protecting user information. While Pornhub has implemented safeguards for its Premium systems, this breach demonstrates that legacy data can remain a potential target long after services are discontinued.

Authorities and cybersecurity professionals are continuing to investigate the breach, and users are advised to remain cautious, monitor accounts for unusual activity, and follow best practices for online security.

The event comes amid growing scrutiny of adult websites and their handling of user data. While Pornhub has faced past criticism over content moderation and security, the company’s assurance that financial and login information remains protected offers some reassurance to affected users.

As the investigation continues, experts emphasize that users’ historical data, even from older analytics tools, can be vulnerable to exploitation, highlighting the importance of robust data security practices across all online platforms.

Militant Groups Adopt AI to Spread Propaganda and Boost Recruitment

Extremist organisations have begun using artificial intelligence (AI) to create realistic images, videos, and audio in efforts to recruit members and amplify their influence, national security experts warn. Since programs such as ChatGPT became widely accessible, militant groups have increasingly experimented with generative AI, despite being unsure how to fully exploit its potential.

Recent reports show that individuals linked to the Islamic State (IS) have encouraged supporters to integrate AI into their operations. One post on a pro-IS forum urged users to make “AI part of their operations,” noting its ease of use and potential to cause concern among intelligence agencies.

IS, which once controlled territory in Iraq and Syria, is now a decentralized network of groups and individuals sharing a violent ideology. The organisation recognized years ago that social media could be a powerful recruitment and propaganda tool, making AI a natural extension of its digital tactics. Even poorly resourced groups or individual actors can now use AI to produce deepfakes and other fabricated content at scale, widening their reach and impact.

“For any adversary, AI really makes it much easier to do things,” said John Laliberte, former NSA vulnerability researcher and CEO of cybersecurity firm ClearVector. “With AI, even a small group that doesn’t have a lot of money is still able to make an impact.”

Militant groups have already used AI-generated content to influence public perception. Two years ago, during the Israel-Hamas conflict, fabricated images showing bloodied children in bombed-out buildings circulated widely online, stirring outrage and polarising audiences. Last year, following an IS-affiliated attack at a Russian concert that killed nearly 140 people, AI-crafted propaganda videos spread rapidly on social media and discussion boards. IS has also produced deepfake audio of leaders reciting scripture and quickly translated messages into multiple languages.

Experts caution that while extremist groups still lag behind states such as China, Russia, and Iran in sophisticated AI applications, their use of the technology, though described as “aspirational,” is nonetheless dangerous. Hackers are already using synthetic media for phishing attacks, and AI can also help write malicious code or automate parts of cyberattacks. Homeland security agencies warn that militants could one day use AI to compensate for technical limitations in producing biological or chemical weapons.

Lawmakers are seeking to address the growing threat. Senator Mark Warner of Virginia has stressed the need for AI developers to share information on misuse by extremists, hackers, or foreign spies. House legislation now requires homeland security officials to assess annually the risks posed by terrorist groups’ use of AI. Representative August Pfluger, who sponsored the bill, said policies must evolve to counter emerging threats.

Marcus Fowler, former CIA agent and CEO of Darktrace Federal, highlighted the urgency: “ISIS got on Twitter early and found ways to use social media to their advantage. They are always looking for the next thing to add to their arsenal.”

As AI becomes increasingly powerful and accessible, security experts warn that militant groups’ ability to manipulate the technology for recruitment, propaganda, and cyber operations is a threat that governments and tech companies cannot ignore.

Report reveals AI-generated videos of children circulating on TikTok, linked to illegal content on Telegram

A recent investigation has found that AI-generated videos showing young girls in sexualised clothing or suggestive poses have gained widespread attention on TikTok, raising serious concerns about child exploitation online. The Spanish fact-checking organisation Maldita analysed over 5,200 videos across more than 20 accounts, which collectively have more than 550,000 followers and nearly six million likes. Many videos featured girls in bikinis, school uniforms, or tight clothing.

Maldita’s analysis also revealed that comments on these videos contained links to external platforms, including Telegram communities that sell child sexual abuse material. The organisation reported 12 such groups to Spanish authorities. The TikTok accounts involved were earning revenue through the platform’s subscription model, under which followers pay a monthly fee for access to a creator’s content; TikTok keeps about half of the proceeds.

The report comes amid global efforts to protect minors online. Countries including Australia, Denmark, and the European Union are introducing or considering restrictions for users under 16, with the goal of curbing exposure to harmful content. TikTok’s own policies require creators to label AI-generated content and allow the removal of content considered harmful to individuals. Despite this, Maldita found that most of the videos it examined did not include any AI identifiers or watermarks. Some content, however, displayed the platform’s “TikTok AI Alive” watermark, which is automatically applied when still images are converted into videos.

In response to the findings, both Telegram and TikTok emphasised their commitment to preventing child sexual abuse material. Telegram stated that it scans all media on its public platform against previously removed content to prevent its spread. In 2025 alone, the platform removed over 909,000 groups and channels containing child sexual abuse material.
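
Telegram’s description of its scanning, checking every public upload against previously removed material, is in broad terms hash matching against a blocklist. The following is a minimal illustrative sketch only: the function name and blocklist contents are hypothetical, and real deployments use perceptual hashes rather than the plain cryptographic hash shown here.

```python
import hashlib

# Hypothetical blocklist of digests of previously removed files.
removed_digests = {
    hashlib.sha256(b"previously-removed-media").hexdigest(),
}

def is_known_removed(media_bytes: bytes) -> bool:
    """Return True if this exact file matches previously removed material."""
    return hashlib.sha256(media_bytes).hexdigest() in removed_digests

print(is_known_removed(b"previously-removed-media"))  # True
print(is_known_removed(b"new-upload"))                # False
```

A cryptographic digest only flags byte-identical copies, which is why production systems favour perceptual hashing that also matches near-duplicate images; the lookup pattern, however, is the same.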

TikTok said 99 percent of content harmful to minors is removed automatically, and that 97 percent of offending AI-generated content is taken down proactively. The platform said it immediately suspends or closes accounts that share sexually explicit content involving children and reports them to the United States’ National Center for Missing and Exploited Children (NCMEC). TikTok also told CNN that between April and June 2025, it removed more than 189 million videos and banned over 108 million accounts.

Maldita’s report highlights the challenges social media platforms face in policing AI-generated content and preventing the exploitation of children. Experts warn that while automated tools and moderation can reduce the spread of illegal material, vigilance by authorities, parents, and platforms remains critical to protect minors in an increasingly digital environment.

Cambridge Index Reveals Global Black Market for Fake Social Media Verifications

A new index developed by the University of Cambridge has revealed the scale and affordability of the underground market for fake social media account verifications, raising fresh concerns about online manipulation and digital security. According to researchers, fake verification badges can be purchased for as little as eight cents, enabling the rapid creation of networks that imitate authentic users across major online platforms.

The Cambridge Online Trust and Safety Index (COTSI), launched on Thursday, is described as the first global tool capable of tracking real-time prices for verifying fraudulent accounts. The index monitors more than 500 platforms, including TikTok, Instagram, Amazon, Spotify and Uber. By analysing data from sellers operating across the dark web and black-market channels, the project highlights how accessible and inexpensive these services have become.

Researchers say the low cost of creating fake accounts is contributing to the rise of “bot armies” — large groups of automated or semi-automated profiles designed to mimic genuine human activity. These networks can distort online conversations, amplify misleading content, and promote scams or commercial products. They can also be deployed to influence political messaging, creating an illusion of public support or opposition during major events such as elections or policy debates.
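
The economics behind these bot armies follow directly from the index’s headline figure. A back-of-the-envelope calculation (the network size below is an arbitrary assumption chosen for illustration, not a number from the index):

```python
price_per_badge = 0.08   # USD per fake verification, per the COTSI figure
network_size = 10_000    # hypothetical bot-army size, for illustration only

total_cost = price_per_badge * network_size
print(f"${total_cost:,.2f}")  # prints $800.00
```

At these prices, a network large enough to distort a trending conversation costs less than a single month of many platforms’ legitimate advertising spend.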

The team behind the index said the findings come at a sensitive time for governments and regulators working to contain misinformation. Many popular platforms have reduced investment in content monitoring during the past two years, while others have introduced programmes that reward users for generating high volumes of engagement. Researchers warn that such incentives may encourage the use of artificially inflated interactions, making fake accounts even more valuable to those seeking influence.

According to Cambridge analysts, the market for fraudulent verification has become highly sophisticated. Sellers offer tiered packages, guaranteeing features such as blue-badge symbols, verified rankings or the appearance of longstanding account history. Prices vary by platform and country, but the index shows that even the most complex packages remain within easy reach for groups attempting to manipulate public debate or carry out coordinated campaigns.

The launch of COTSI marks the first attempt to document these prices on a global scale. By presenting live data on the cost of creating fake identities, researchers hope to give policymakers, technology companies and security agencies a clearer picture of how digital manipulation is evolving. The study’s authors stress that tracking these markets is essential for understanding the risks posed by unauthenticated accounts, particularly during periods of political tension.

The university said the index will be updated regularly and will remain publicly accessible as part of its efforts to strengthen digital transparency worldwide.
