Tech
TikTok Launches Crowd-Sourced Fact-Checking Tool ‘Footnotes’ in U.S.
TikTok has rolled out a new crowd-sourced fact-checking feature in the United States, joining other major social media platforms in enlisting users to help verify content.
The tool, called Footnotes, allows users to add contextual notes to videos and vote on whether other notes should appear. According to TikTok, these footnotes can include expert perspectives on complex topics or additional data to give audiences a more complete understanding of events.
The approach mirrors similar initiatives on platforms like X (formerly Twitter) and Meta’s Facebook and Instagram, where community-driven notes have been used to counter misinformation. X introduced its version, originally called Birdwatch, in 2021 and continued it after Elon Musk’s takeover. Meta launched its own program earlier this year.
Experts say the move reflects a broader trend toward moderation models that emphasize free speech while limiting platform intervention. Otavio Vinhas, a researcher at Brazil’s National Institute of Science and Technology, links the shift to political pressures — particularly in the U.S. — to reduce corporate control over online speech.
Supporters of crowd-sourced moderation point to research suggesting that large groups can often match professional fact-checkers at judging factual accuracy. However, Vinhas notes that TikTok’s version is stricter than others, requiring users to cite sources for their notes — something not mandatory on X.
Still, visibility remains a hurdle. Scott Hale, associate professor at the Oxford Internet Institute, said that most notes on all platforms are never seen. This is due in part to algorithms that test whether people with differing viewpoints find the same note helpful before displaying it publicly. A study by the Digital Democracy Institute of the Americas found that over 90% of 1.7 million English and Spanish notes on X never appeared on the platform, with those that did averaging a two-week delay before publication.
Hale warns that echo chambers — where users primarily see content that confirms their beliefs — make it difficult for contradicting notes to gain traction. He suggests “gamifying” contributions, similar to Wikipedia’s reward and recognition systems, to encourage greater participation and visibility.
Crowd-sourced notes are just one tool in social media’s moderation toolkit. Platforms like Meta, X, and TikTok also rely on automated systems to flag potential violations, as well as professional fact-checkers to verify claims, often in real time during political or social crises.
Both Hale and Vinhas agree that professional and community-based fact-checking can complement each other — combining grassroots engagement with the depth of trained investigators. For now, TikTok says Footnotes will contribute to a broader global fact-checking program, though it has not confirmed long-term plans for expansion.
Study Finds Chatbots Can Mirror Hostility in Heated Exchanges
A new academic study has found that ChatGPT can produce abusive language when exposed to escalating human conflict, raising fresh concerns about how artificial intelligence behaves in tense interactions.
The research, published in the Journal of Pragmatics, examined how the chatbot responded to arguments that gradually became more hostile. Researchers presented the system with a sequence of five increasingly heated exchanges and asked it to generate what it considered the most plausible reply.
According to the findings, the AI’s tone shifted as the conversations intensified. While early responses remained measured, later replies began to mirror the aggression in the prompts. In some cases, the chatbot produced insults, profanity and even threats.
Examples cited in the study included statements such as “you should be ashamed of yourself” and more explicit language involving personal threats. The researchers said this pattern suggests that prolonged exposure to hostile input can push the system beyond its usual safeguards.
The study was co-authored by Vittorio Tantucci and Jonathan Culpeper at Lancaster University. Tantucci said the results show that AI can “escalate” alongside human users, potentially overriding built-in mechanisms designed to limit harmful responses.
“When humans escalate, AI can escalate too,” he said, noting that this behavior raises questions about how such systems should be deployed in sensitive environments.
Despite the concerning examples, the researchers found that the chatbot was generally less aggressive than human participants in similar scenarios. In some cases, it attempted to defuse tension through sarcasm or indirect responses rather than direct confrontation.
For instance, when faced with a threat during a simulated dispute, the AI responded with a sarcastic remark rather than escalating the situation further. This suggests that while the system can adopt hostile language, it may also attempt to manage conflict in less direct ways.
The findings add to ongoing debates about the role of artificial intelligence in areas such as mediation, customer service and online communication, where systems may encounter emotionally charged interactions.
Experts say the research highlights the importance of continued testing and refinement of AI safety measures, particularly as such tools are increasingly used in real-world settings involving human conflict.
OpenAI, the developer of ChatGPT, had not issued a public response to the study at the time of publication.
