Yann LeCun to Leave Meta and Launch New AI Venture Focused on Next-Generation Machine Intelligence

Yann LeCun, one of the most influential figures in artificial intelligence and a pioneer often referred to as a “godfather of AI,” has announced he will step down from Meta at the end of the year to launch a new machine learning company. The move marks the end of a 12-year tenure during which he played a central role in shaping Meta’s AI research efforts.

In a message posted on LinkedIn on Wednesday, LeCun said his departure follows five years as the founding director of Meta’s AI research lab, FAIR, and seven years as the company’s chief AI scientist. He said the time had come to pursue an independent path focused on advancing research he believes will drive the next major leap in artificial intelligence.

LeCun has been increasingly outspoken about what he sees as the limitations of large language models, the technology behind systems such as ChatGPT and Meta’s Llama. Although Meta has invested billions of dollars in LLM development, he has argued that these models will not lead to true machine intelligence.

“LLMs are great, they’re useful, we should invest in them — a lot of people are going to use them,” he said during an event on Sunday. “They are not a path to human-level intelligence. They’re just not.” He added that the dominance of LLM-focused research had drained resources from exploring alternative approaches, which he considers essential for long-term progress.

LeCun has long advocated for “world models,” a different form of AI that learns using visual data such as videos. Unlike language models that predict the next word in a sequence, world models attempt to predict what happens next in a physical environment, allowing systems to develop an understanding of cause-and-effect. He views this approach as key to building machines capable of reasoning, planning, and interacting more naturally with the real world.
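As a rough, hypothetical illustration of that distinction (the sketch below uses assumed function names and is not drawn from FAIR’s published work), a language model is trained to reduce its error on the next word in a text, while a world model is trained to reduce its error on the next observation of its environment:

import numpy as np

def next_token_objective(token_ids, predict_next_token):
    # Language-model objective: given the tokens seen so far, score how well
    # the model predicts the next token (cross-entropy on the next word).
    loss = 0.0
    for t in range(len(token_ids) - 1):
        probs = predict_next_token(token_ids[: t + 1])   # distribution over the vocabulary
        loss -= np.log(probs[token_ids[t + 1]] + 1e-12)
    return loss / (len(token_ids) - 1)

def world_model_objective(frames, actions, predict_next_state):
    # World-model objective: given the current observation of the environment
    # and an action, score how well the model predicts what happens next
    # (here, the next video frame).
    loss = 0.0
    for t in range(len(frames) - 1):
        predicted = predict_next_state(frames[t], actions[t])
        loss += np.mean((predicted - frames[t + 1]) ** 2)
    return loss / (len(frames) - 1)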

His new company will focus on developing the Advanced Machine Intelligence (AMI) program he had been working on at FAIR and New York University. LeCun said the goal is to create systems that can understand the physical world, maintain persistent memory, and plan complex actions. The start-up, called AMI — also the French word for “friend” — will pursue applications across multiple industries.

While the company’s work may intersect with Meta’s interests in some areas, LeCun noted that many applications will fall outside Meta’s commercial scope. He said launching AMI as an independent venture would allow the research to have wider impact. He also expressed appreciation to Meta CEO Mark Zuckerberg, adding that Meta will serve as a partner in the new venture, though he did not detail the specifics of that partnership.

Militant Groups Adopt AI to Spread Propaganda and Boost Recruitment

Extremist organisations have begun using artificial intelligence (AI) to create realistic images, videos, and audio in efforts to recruit members and amplify their influence, national security experts warn. Since programs such as ChatGPT became widely accessible, militant groups have increasingly experimented with generative AI, despite being unsure how to fully exploit its potential.

Recent reports show that individuals linked to the Islamic State (IS) have encouraged supporters to integrate AI into their operations. One post on a pro-IS forum urged users to make “AI part of their operations,” noting its ease of use and potential to cause concern among intelligence agencies.

IS, which once controlled territory in Iraq and Syria, is now a decentralized network of groups and individuals sharing a violent ideology. The organisation recognized years ago that social media could be a powerful recruitment and propaganda tool, making AI a natural extension of its digital tactics. Even poorly resourced groups or individual actors can now use AI to produce deepfakes and other fabricated content at scale, widening their reach and impact.

“For any adversary, AI really makes it much easier to do things,” said John Laliberte, former NSA vulnerability researcher and CEO of cybersecurity firm ClearVector. “With AI, even a small group that doesn’t have a lot of money is still able to make an impact.”

Militant groups have already used AI-generated content to influence public perception. Two years ago, during the Israel-Hamas conflict, fabricated images showing bloodied children in bombed-out buildings circulated widely online, stirring outrage and polarising audiences. Last year, following an IS-affiliated attack at a Russian concert that killed nearly 140 people, AI-crafted propaganda videos spread rapidly on social media and discussion boards. IS has also produced deepfake audio of leaders reciting scripture and quickly translated messages into multiple languages.

Experts caution that while extremist groups still lag behind states such as China, Russia, and Iran in sophisticated AI applications, their use of the technology, though still described as “aspirational,” is dangerous. Hackers are already using synthetic media for phishing attacks, and AI can also help write malicious code or automate parts of cyberattacks. Homeland security agencies warn that militants could one day use AI to compensate for technical limitations in producing biological or chemical weapons.

Lawmakers are seeking to address the growing threat. Senator Mark Warner of Virginia stressed the need for AI developers to share information on misuse by extremists, hackers, or foreign spies. House legislation now requires homeland security officials to assess, on an annual basis, the risks posed by terrorists’ use of AI. Representative August Pfluger, who sponsored the bill, said policies must evolve to counter emerging threats.

Marcus Fowler, former CIA agent and CEO of Darktrace Federal, highlighted the urgency: “ISIS got on Twitter early and found ways to use social media to their advantage. They are always looking for the next thing to add to their arsenal.”

As AI becomes increasingly powerful and accessible, security experts warn that militant groups’ ability to exploit the technology for recruitment, propaganda, and cyber operations is a threat that governments and tech companies cannot ignore.

Report Reveals AI-Generated Videos of Children Circulating on TikTok, Linked to Illegal Content on Telegram

A recent investigation has found that AI-generated videos showing young girls in sexualised clothing or suggestive poses have gained widespread attention on TikTok, raising serious concerns about child exploitation online. The Spanish fact-checking organisation Maldita analysed over 5,200 videos across more than 20 accounts, which collectively have more than 550,000 followers and nearly six million likes. Many videos featured girls in bikinis, school uniforms, or tight clothing.

Maldita’s analysis also revealed that comments on these videos contained links to external platforms, including Telegram communities that sell child sexual abuse material. The organisation reported 12 such groups to Spanish authorities. The TikTok accounts involved were generating revenue through the platform’s subscription model, in which followers pay creators a monthly fee for access to their content; TikTok keeps about half of that revenue.

The report comes amid global efforts to protect minors online. Australia and Denmark, along with the European Union, are introducing or considering restrictions for users under 16, with the goal of curbing exposure to harmful content. TikTok’s own policies require creators to label AI-generated content and allow the removal of content considered harmful to individuals. Despite this, Maldita found that most of the videos it examined did not include any AI identifiers or watermarks. Some content, however, displayed the platform’s “TikTok AI Alive” watermark, which is automatically applied when still images are converted into videos.

In response to the findings, both Telegram and TikTok emphasised their commitment to preventing child sexual abuse material. Telegram stated that it scans all media on its public platform against previously removed content to prevent its spread. In 2025 alone, the platform removed over 909,000 groups and channels containing child sexual abuse material.

TikTok said 99 percent of content harmful to minors is removed automatically, and 97 percent of offending AI-generated content is taken down proactively. The platform said it immediately suspends or closes accounts that share sexually explicit content involving children and reports them to the United States’ National Center for Missing and Exploited Children (NCMEC). TikTok also told CNN that between April and June 2025, it removed more than 189 million videos and banned over 108 million accounts.

Maldita’s report highlights the challenges social media platforms face in policing AI-generated content and preventing the exploitation of children. Experts warn that while automated tools and moderation can reduce the spread of illegal material, vigilance by authorities, parents, and platforms remains critical to protect minors in an increasingly digital environment.

Cambridge Index Reveals Global Black Market for Fake Social Media Verifications

A new index developed by the University of Cambridge has revealed the scale and affordability of the underground market for fake social media account verifications, raising fresh concerns about online manipulation and digital security. According to researchers, fake verification badges can be purchased for as little as eight cents, enabling the rapid creation of networks that imitate authentic users across major online platforms.

The Cambridge Online Trust and Safety Index (COTSI), launched on Thursday, is described as the first global tool capable of tracking real-time prices for verifying fraudulent accounts. The index monitors more than 500 platforms, including TikTok, Instagram, Amazon, Spotify and Uber. By analysing data from sellers operating across the dark web and black-market channels, the project highlights how accessible and inexpensive these services have become.

Researchers say the low cost of creating fake accounts is contributing to the rise of “bot armies” — large groups of automated or semi-automated profiles designed to mimic genuine human activity. These networks can distort online conversations, amplify misleading content, and promote scams or commercial products. They can also be deployed to influence political messaging, creating an illusion of public support or opposition during major events such as elections or policy debates.

The team behind the index said the findings come at a sensitive time for governments and regulators working to contain misinformation. Many popular platforms have reduced investment in content monitoring during the past two years, while others have introduced programmes that reward users for generating high volumes of engagement. Researchers warn that such incentives may encourage the use of artificially inflated interactions, making fake accounts even more valuable to those seeking influence.

According to Cambridge analysts, the market for fraudulent verification has become highly sophisticated. Sellers offer tiered packages, guaranteeing features such as blue-badge symbols, verified rankings or the appearance of longstanding account history. Prices vary by platform and country, but the index shows that even the most complex packages remain within easy reach for groups attempting to manipulate public debate or carry out coordinated campaigns.

The launch of COTSI marks the first attempt to document these prices on a global scale. By presenting live data on the cost of creating fake identities, researchers hope to give policymakers, technology companies and security agencies a clearer picture of how digital manipulation is evolving. The study’s authors stress that tracking these markets is essential for understanding the risks posed by unauthenticated accounts, particularly during periods of political tension.

The university said the index will be updated regularly and will remain publicly accessible as part of its efforts to strengthen digital transparency worldwide.
