TikTok Launches Crowd-Sourced Fact-Checking Tool ‘Footnotes’ in U.S.

TikTok has rolled out a new crowd-sourced fact-checking feature in the United States, joining other major social media platforms in enlisting users to help verify content.

The tool, called Footnotes, allows users to add contextual notes to videos and vote on whether other notes should appear. According to TikTok, these footnotes can include expert perspectives on complex topics or additional data to give audiences a more complete understanding of events.

The approach mirrors similar initiatives on platforms like X (formerly Twitter) and Meta’s Facebook and Instagram, where community-driven notes have been used to counter misinformation. X introduced its version, originally called Birdwatch, in 2021 and continued it after Elon Musk’s takeover. Meta launched its own programme earlier this year.

Experts say the move reflects a broader trend toward moderation models that emphasize free speech while limiting platform intervention. Otavio Vinhas, a researcher at Brazil’s National Institute of Science and Technology, links the shift to political pressures — particularly in the U.S. — to reduce corporate control over online speech.

Supporters of crowd-sourced moderation point to research suggesting that, when evaluating factual accuracy, large groups can often match professional fact-checkers in identifying reliable information. However, Vinhas notes that TikTok’s version is stricter than others, requiring users to cite sources for their notes — something not mandatory on X.

Still, visibility remains a hurdle. Scott Hale, associate professor at the Oxford Internet Institute, said that most notes on all platforms are never seen. This is due in part to algorithms that test whether people with differing viewpoints find the same note helpful before displaying it publicly. A study by the Digital Democracy Institute of the Americas found that over 90% of 1.7 million English and Spanish notes on X never appeared on the platform, with those that did averaging a two-week delay before publication.
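
To make the gating idea Hale describes more concrete, below is a minimal Python sketch of a cross-viewpoint helpfulness check: a note only becomes visible when raters from different viewpoint clusters independently find it helpful. This is an illustrative simplification under assumed thresholds, not TikTok's or X's actual ranking algorithm (X's production system uses a matrix-factorization model), and the function names, clustering step, and cutoffs are invented for demonstration.

```python
# Minimal sketch of a "cross-viewpoint helpfulness" gate for community notes.
# Illustrative only: NOT TikTok's or X's actual algorithm; clusters, names,
# and thresholds are assumptions made for this example.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Rating:
    rater_cluster: str   # viewpoint cluster (e.g. "A" or "B"), inferred upstream
    helpful: bool        # did this rater mark the note as helpful?

def note_is_publishable(ratings, min_per_cluster=5, min_helpful_ratio=0.7):
    """Show a note only if every viewpoint cluster that rated it supplied
    enough ratings and found the note helpful often enough."""
    by_cluster = defaultdict(list)
    for r in ratings:
        by_cluster[r.rater_cluster].append(r.helpful)

    if len(by_cluster) < 2:          # require agreement across differing viewpoints
        return False
    for votes in by_cluster.values():
        if len(votes) < min_per_cluster:
            return False             # not enough signal from this cluster yet
        if sum(votes) / len(votes) < min_helpful_ratio:
            return False             # this cluster does not find the note helpful
    return True

# Example: rated helpful by both clusters -> eligible to appear
sample = [Rating("A", True)] * 6 + [Rating("A", False)] + [Rating("B", True)] * 5
print(note_is_publishable(sample))   # True
```

A gate like this explains why so few notes ever surface: a note that only one side rates at all, or that one side rates unhelpful, never clears the bar.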

Hale warns that echo chambers — where users primarily see content that confirms their beliefs — make it difficult for contradicting notes to gain traction. He suggests “gamifying” contributions, similar to Wikipedia’s reward and recognition systems, to encourage greater participation and visibility.

Crowd-sourced notes are just one tool in social media’s moderation toolkit. Platforms like Meta, X, and TikTok also rely on automated systems to flag potential violations, as well as professional fact-checkers to verify claims, often in real time during political or social crises.

Both Hale and Vinhas agree that professional and community-based fact-checking can complement each other — combining grassroots engagement with the depth of trained investigators. For now, TikTok says Footnotes will contribute to a broader global fact-checking programme, though it has not confirmed long-term plans for expansion.

Study Finds AI Models Get Basic Math Wrong Around 40 Percent of the Time

Artificial intelligence (AI) tools are increasingly used for everyday calculations, but a new study suggests users should approach their answers with caution. Researchers behind the Omni Research on Calculation in AI (ORCA) benchmark found that, when tested on 500 real-world math prompts, AI models produced an incorrect result roughly 40 percent of the time.

The study evaluated five widely used AI systems in October 2025: ChatGPT-5 (OpenAI), Gemini 2.5 Flash (Google), Claude 4.5 Sonnet (Anthropic), DeepSeek V3.2 (DeepSeek AI), and Grok-4 (xAI). None of the models scored above 63 percent overall, with Gemini leading at 63 percent, Grok close behind at 62.8 percent, and DeepSeek at 52 percent. ChatGPT-5 scored 49.4 percent, while Claude trailed at 45.2 percent. The average accuracy across all five models was 54.5 percent.
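
As a quick sanity check, the five per-model scores quoted above do average out to the stated 54.5 percent. The snippet below uses only the numbers reported in the article; the dictionary layout is just for readability.

```python
# Quick check: the five per-model scores reported in the ORCA study
# average to the stated overall accuracy of roughly 54.5 percent.
scores = {
    "Gemini 2.5 Flash": 63.0,
    "Grok-4": 62.8,
    "DeepSeek V3.2": 52.0,
    "ChatGPT-5": 49.4,
    "Claude 4.5 Sonnet": 45.2,
}
average = sum(scores.values()) / len(scores)
print(f"Average accuracy: {average:.1f}%")   # -> Average accuracy: 54.5%
```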

“Although the exact rankings might shift if we repeated the benchmark today, the broader conclusion would likely remain the same: numerical reliability remains a weak spot across current AI models,” said Dawid Siuda, co-author of the ORCA Benchmark.

Performance varied across categories. AI models performed best in basic math and conversions, with Gemini achieving 83 percent accuracy and Grok 76.9 percent. ChatGPT-5 scored 66.7 percent in the same category, and the category average across all five models was 72.1 percent, the highest of the seven tested categories. Physics proved the most challenging, with overall accuracy dropping to 35.8 percent. Grok led this category at 43.8 percent, while Claude scored just 26.6 percent.

Some AI systems struggled more than others in specific fields. DeepSeek recorded only 10.6 percent accuracy in biology and chemistry, meaning it failed nearly nine out of ten questions. In finance and economics, Gemini and Grok reached 76.7 percent, while the other three models scored below 50 percent.

The study also categorized the types of mistakes the models made. “Sloppy math” errors, including miscalculations and rounding issues, accounted for 68 percent of mistakes. Faulty logic, reflecting incorrect formulas or assumptions, represented 26 percent. Misreading instructions accounted for 5 percent, and in the small remainder of cases the models simply refused to answer. Siuda noted that multi-step calculations involving rounding were particularly prone to error.

The research highlights the importance of verifying AI-generated calculations. “If the task is critical, use calculators or proven sources, or at least double-check with another AI,” Siuda advised.

All 500 prompts used in the study had one correct answer and were designed to reflect everyday math tasks, including statistics, finance, physics, and basic arithmetic. The findings indicate that while AI can assist with calculations, it remains unreliable for precise numerical work, and users should verify important results before relying on these tools.

Generative AI Adoption Varies Widely Across Europe, Survey Finds

The use of generative artificial intelligence (Gen AI) tools such as ChatGPT, Gemini, and Grok has grown significantly across Europe, with millions of people now relying on the technology for personal, work, and educational purposes. These tools can generate new content, including text, images, code, and videos, based on user prompts and patterns learned from existing data.

According to Eurostat, about one-third of Europeans aged 16 to 74 used Gen AI tools at least once in 2025. However, adoption rates vary widely across the continent, with usage ranging from 17 percent in Turkey to 56 percent in Norway. Within the European Union, Denmark leads with 48 percent of people reporting AI use, while Romania has the lowest rate at 18 percent.

Thirteen countries reported that at least two in five people had used Gen AI tools in the three months prior to the survey. These include Switzerland and Estonia (47 percent each), Malta (46 percent), Finland (46 percent), Ireland (45 percent), the Netherlands (45 percent), Cyprus (44 percent), Greece (44 percent), Luxembourg (43 percent), Belgium (42 percent), and Sweden (42 percent).

Conversely, eight countries saw usage fall below 25 percent, including Serbia (19 percent), Italy (20 percent), Bosnia and Herzegovina (20 percent), North Macedonia (22 percent), Bulgaria (23 percent), Poland (23 percent), Turkey (17 percent), and Romania (18 percent). Among major EU economies, Germany (32 percent) and Italy (20 percent) remain below the EU average, while Spain (38 percent) and France (37 percent) slightly exceed it.

Experts say the differences reflect the broader digital landscape and skill levels in each country. Colin van Noordt, a researcher at KU Leuven in Belgium, told Euronews Next that nations with strong digital foundations, like Denmark and Switzerland, have higher adoption rates because their populations already have solid digital skills, use the internet frequently, and are familiar with new technology.

“In countries with lower adoption, people often don’t know generative AI exists or are unsure how to use it,” van Noordt said. He added that understanding how AI can be applied in daily life or work, often referred to as “AI literacy,” is a major factor in adoption. Government policies may encourage use, but underlying digital culture and practical skills appear to have a greater impact, he said.

The survey also highlighted differences in how AI is used. Across the EU as a whole, personal use (25 percent) exceeds work-related use (15 percent), a pattern that holds in every country, though the gap varies. In the Netherlands, personal and work use are nearly equal at 28 percent and 27 percent, respectively. In Greece, 41 percent use AI personally, compared with just 16 percent at work.

Use of AI in formal education is limited, with only 9 percent of Europeans reporting educational use. Sweden and Switzerland lead at 21 percent, while Hungary records just 1 percent. Analysts suggest that uncertainty over practical applications of AI continues to limit workplace and educational adoption.

The Eurostat data underscores a clear north–south and west–east divide in Gen AI adoption, with Nordic and digitally advanced countries leading the way and southern, central-eastern, and Balkan nations trailing.

As AI Hype Fades, Analysts Say ‘Boring’ Tools May Last Longer Online

After a year of intense attention on flashy AI applications, analysts are noting a shift in sentiment, with practical, low-profile tools expected to have a longer-term impact than more sensational AI offerings.

In 2025, “AI slop”—low-quality or unwanted AI-generated content—became a major feature of the Internet. From confusing chatbots to nonsensical product summaries, AI slop appeared across search engines, e-commerce platforms, and even official communications. Online media and consumer intelligence firm Meltwater reported that mentions of “AI slop” grew ninefold this year compared to 2024, with negative sentiment peaking at 54 percent in October. According to SEO firm Graphite, AI-generated content now represents more than half of all English-language material online. The term was even named Word of the Year 2025 by Merriam-Webster and Australia’s national dictionary.

Analysts warn that much of this content reflects “solution-led design,” where technology is added first, then products are built to justify it. Kate Moran, vice president of research at Nielsen Norman Group, said companies have often introduced AI in ways that confuse users rather than solve problems. She cited Meta’s AI search feature on Instagram, which replaced the traditional search bar and was quickly rolled back after user backlash. Consumer AI hardware, such as the Humane AI Pin, also received negative reviews, suggesting that “solutions are being built for problems that don’t exist,” according to Logitech CEO Hanneke Faber.

Even as some firms continue to launch flashy AI apps, user engagement has been muted. Meta introduced its AI video app “Vibes” in Europe this year, but early reports indicate just 23,000 daily users across the continent, concentrated in France, Italy, and Spain. The launch also sits awkwardly alongside the company’s earlier pledges to prioritize “authentic storytelling” over low-value AI-generated content.

Experts say that practical, low-interaction AI features may be more effective in improving user experience. Moran highlighted Amazon’s AI-generated summaries of product reviews as a valuable example, providing quick insights without requiring user input. Similarly, Daniel Mügge, a researcher at the University of Amsterdam, argued that European tech investment should prioritize AI applications that solve concrete problems in robotics, manufacturing, or other sectors, rather than tools that amplify advertising or create low-quality content.

Platforms like Pinterest and YouTube are already responding to user frustration by allowing people to limit AI-generated content. Analysts say these “boring” but useful tools are shaping a more intentional approach to AI design.

“Smaller, specialized AI products can make a real difference for users without grabbing headlines,” Moran said. Mügge added that focusing on practical applications allows smaller companies to contribute meaningfully while avoiding a direct race with dominant AI developers.

As the AI hype cools, analysts agree that thoughtful, problem-focused tools are likely to outlast flashy applications, shaping the future of the Internet in ways that matter to everyday users.
