Tech
Experts Question Impact of Australia’s New Social Media Ban for Children Under 16
Australia has introduced sweeping restrictions that prevent children under 16 from creating or maintaining accounts on major social media platforms, but experts warn the measures may not significantly change young people’s online behaviour. The restrictions, which took effect on December 10, apply to platforms including Facebook, Instagram, TikTok, Snapchat, YouTube, Twitch, Reddit and X.
Under the new rules, children cannot open accounts, yet they can still access most platforms without logging in—raising questions about how effective the regulations will be in shaping online habits. The eSafety Commissioner says the reforms are intended to shield children from online pressures, addictive design features and content that may harm their health and wellbeing.
Social media companies are required to block underage users through age-assurance tools that rely on facial-age estimation, ID uploads or parental consent. Ahead of the rollout, authorities tested 60 verification systems across 28,500 facial recognition assessments. The results showed that while many tools could distinguish children from adults, accuracy declined among users aged 16 and 17, girls and non-Caucasian users, where estimates could be off by two years or more. Experts say the limitations mean many teenagers may still find ways around the rules.
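For illustration, a minimal sketch of how a hard cutoff at 16 interacts with a two-year estimation error; the threshold logic and figures below are assumptions made for the example, not any platform’s actual system:

```python
# Hypothetical age-gate check -- illustrative only, not any platform's real system.
# A two-year estimation error (the worst case reported for 16- and 17-year-olds)
# can both wrongly block older teens and wrongly admit younger ones.

CUTOFF = 16  # minimum age under the Australian rules

def gate(estimated_age: float) -> str:
    """Block account creation when the estimated age falls below the cutoff."""
    return "blocked" if estimated_age < CUTOFF else "allowed"

print(gate(17 - 2))  # a 17-year-old estimated two years low is wrongly blocked
print(gate(14 + 2))  # a 14-year-old estimated two years high slips through
```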
“How do they know who is 14 or 15 when the kids have all signed up as being 75?” asked Sonia Livingstone, a social psychology professor at the London School of Economics. She warned that misclassifications will be common as platforms attempt to enforce the regulations.
Meta acknowledged the challenge, saying complete accuracy is unlikely without requiring every user to present government ID—something the company argues would raise privacy and security concerns. Users over 16 who lose access by mistake are allowed to appeal.
Several platforms have criticised the ban, arguing that it removes teenagers from safer, controlled environments. Meta and Google representatives told Australian lawmakers that logged-in teenage accounts already come with protections that limit contact from unknown users, filter sensitive subjects and disable personalised advertising. Experts say these protections are not always effective, citing studies where new YouTube and TikTok accounts quickly received misogynistic or self-harm-related content.
Analysts expect many teenagers to shift to smaller or lesser-regulated platforms. Apps such as Lemon8, Coverstar and Tango have surged into Australia’s top downloads since the start of December. Messaging apps like WhatsApp, Telegram and Signal—exempt from the ban—have also seen a spike in downloads. Livingstone said teenagers will simply “find alternative spaces,” noting that previous bans in other countries pushed young users to new platforms within days.
Researchers caution that gaming platforms such as Discord and Roblox, also outside the scope of the ban, may become new gathering points for young Australians. Studies will be conducted to assess the long-term impact on mental health and whether the restrictions support or complicate parents’ efforts to regulate screen time.
Experts say it may take several years to determine whether the ban delivers meaningful improvements to children’s wellbeing.
Tech
Study Finds AI Models Get Basic Math Wrong Around 40 Percent of the Time
Artificial intelligence (AI) tools are increasingly used for everyday calculations, but a new study suggests users should approach their answers with caution. Researchers from the Omni Research on Calculation in AI (ORCA) found that when tested on 500 real-world math prompts, AI models had roughly a 40 percent chance of producing an incorrect result.
The study evaluated five widely used AI systems in October 2025: ChatGPT-5 (OpenAI), Gemini 2.5 Flash (Google), Claude 4.5 Sonnet (Anthropic), DeepSeek V3.2 (DeepSeek AI), and Grok-4 (xAI). None of the models scored above 63 percent overall, with Gemini leading at 63 percent, Grok close behind at 62.8 percent, and DeepSeek at 52 percent. ChatGPT-5 scored 49.4 percent, while Claude trailed at 45.2 percent. The average accuracy across all five models was 54.5 percent.
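As a quick check on the headline figure, a minimal sketch reproducing the overall average from the five scores reported above:

```python
# Reproduces the study's overall average from the five reported overall scores.
scores = {
    "Gemini 2.5 Flash": 63.0,
    "Grok-4": 62.8,
    "DeepSeek V3.2": 52.0,
    "ChatGPT-5": 49.4,
    "Claude 4.5 Sonnet": 45.2,
}

average = sum(scores.values()) / len(scores)
print(f"Average accuracy: {average:.1f}%")  # 54.5%, matching the figure above
```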
“Although the exact rankings might shift if we repeated the benchmark today, the broader conclusion would likely remain the same: numerical reliability remains a weak spot across current AI models,” said Dawid Siuda, co-author of the ORCA Benchmark.
Performance varied across categories. The models performed best in basic math and conversions, with Gemini achieving 83 percent accuracy, Grok 76.9 percent and ChatGPT-5 66.7 percent; averaged across all five models, the category reached 72.1 percent, the highest of the seven tested categories. Physics proved the most challenging, with overall accuracy dropping to 35.8 percent. Grok led this category at 43.8 percent, while Claude scored just 26.6 percent.
Some AI systems struggled more than others in specific fields. DeepSeek recorded only 10.6 percent accuracy in biology and chemistry, meaning it failed nearly nine out of ten questions. In finance and economics, Gemini and Grok reached 76.7 percent, while the other three models scored below 50 percent.
The study also categorized the types of mistakes the models made. “Sloppy math” errors, including miscalculations and rounding issues, accounted for 68 percent of mistakes. Faulty-logic errors, reflecting incorrect formulas or assumptions, made up 26 percent. Misreading instructions accounted for 5 percent, and in the remaining cases the models simply refused to answer. Siuda noted that multi-step calculations involving rounding were particularly prone to error.
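Siuda’s point about multi-step rounding can be made concrete with a small invented example; none of the figures below come from the study:

```python
# Invented example of rounding drift in a multi-step calculation --
# the kind of "sloppy math" error described above.
bill = 100.00
people = 3

# Rounding the per-person share first, then multiplying back:
share = round(bill / people, 2)           # 33.33
reconstructed = round(share * people, 2)  # 99.99 -- a cent has gone missing

# Keeping full precision until a single final rounding step:
exact = round(bill / people * people, 2)  # 100.0

print(share, reconstructed, exact)
```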
The research highlights the importance of verifying AI-generated calculations. “If the task is critical, use calculators or proven sources, or at least double-check with another AI,” Siuda advised.
All 500 prompts used in the study had one correct answer and were designed to reflect everyday math tasks, including statistics, finance, physics, and basic arithmetic. The findings indicate that while AI can assist with calculations, it remains unreliable for precise numerical work and users should remain cautious when relying on these tools.
Tech
Generative AI Adoption Varies Widely Across Europe, Survey Finds
The use of generative artificial intelligence (Gen AI) tools such as ChatGPT, Gemini, and Grok has grown significantly across Europe, with millions of people now relying on the technology for personal, work, and educational purposes. These tools can generate new content, including text, images, code, and videos, based on user prompts and patterns learned from existing data.
According to Eurostat, about one-third of Europeans aged 16 to 74 used AI tools at least once in 2025. However, adoption rates vary widely across the continent, with usage ranging from 17 percent in Turkey to 56 percent in Norway. Within the European Union, Denmark leads with 48 percent of people reporting AI use, while Romania has the lowest rate at 18 percent.
Thirteen countries reported that at least two in five people had used Gen AI tools in the three months prior to the survey. Besides Norway and Denmark, they include Switzerland and Estonia (47 percent each), Malta (46 percent), Finland (46 percent), Ireland (45 percent), the Netherlands (45 percent), Cyprus (44 percent), Greece (44 percent), Luxembourg (43 percent), Belgium (42 percent), and Sweden (42 percent).
Conversely, eight countries saw usage fall below 25 percent: Turkey (17 percent), Romania (18 percent), Serbia (19 percent), Italy (20 percent), Bosnia and Herzegovina (20 percent), North Macedonia (22 percent), Bulgaria (23 percent), and Poland (23 percent). Among major EU economies, Germany (32 percent) and Italy (20 percent) remain below the EU average, while Spain (38 percent) and France (37 percent) slightly exceed it.
Experts say the differences reflect the broader digital landscape and skill levels in each country. Colin van Noordt, a researcher at KU Leuven in Belgium, told Euronews Next that nations with strong digital foundations, such as Denmark and Switzerland, see higher adoption because their populations already have solid digital skills, use the internet frequently, and are comfortable with new technology.
“In countries with lower adoption, people often don’t know generative AI exists or are unsure how to use it,” van Noordt said. He added that understanding how AI can be applied in daily life or work, often referred to as “AI literacy,” is a major factor in adoption. Government policies may encourage use, but underlying digital culture and practical skills appear to have a greater impact, he said.
The survey also highlighted differences in how AI is used. Across the EU, personal use (25 percent) exceeds work-related use (15 percent) in every country, though the gap varies. In the Netherlands, personal and work use are nearly equal at 28 percent and 27 percent, respectively. In Greece, 41 percent use AI personally, compared with just 16 percent at work.
Use of AI in formal education is limited, with only 9 percent of Europeans reporting educational use. Sweden and Switzerland lead at 21 percent, while Hungary records just 1 percent. Analysts suggest that uncertainty over practical applications of AI continues to limit workplace and educational adoption.
The Eurostat data underscores a clear north–south and west–east divide in Gen AI adoption, with Nordic and digitally advanced countries leading the way and southern, central-eastern, and Balkan nations trailing.
Tech
As AI Hype Fades, Analysts Say ‘Boring’ Tools May Last Longer Online
After a year of intense attention on flashy AI applications, analysts say the focus is shifting, with practical, low-profile tools likely to have a longer-lasting impact than more sensational AI offerings.
In 2025, “AI slop”—low-quality or unwanted AI-generated content—became a major feature of the Internet. From confusing chatbots to nonsensical product summaries, AI slop appeared across search engines, e-commerce platforms, and even official communications. Online media and consumer intelligence firm Meltwater reported that mentions of “AI slop” grew ninefold this year compared to 2024, with negative sentiment peaking at 54 percent in October. According to SEO firm Graphite, AI-generated content now represents more than half of all English-language material online. The term was even named Word of the Year 2025 by Merriam-Webster and Australia’s national dictionary.
Analysts warn that much of this content reflects “solution-led design,” where technology is added first, then products are built to justify it. Kate Moran, vice president of research at Nielsen Norman Group, said companies have often introduced AI in ways that confuse users rather than solve problems. She cited Meta’s AI search feature on Instagram, which replaced the traditional search bar and was quickly rolled back after user backlash. Consumer AI hardware, such as the Humane AI Pin, also received negative reviews, suggesting that “solutions are being built for problems that don’t exist,” according to Logitech CEO Hanneke Faber.
Even as some firms continue to launch flashy AI apps, user engagement has been muted. Meta introduced its AI video app “Vibes” in Europe this year, but early reports indicate just 23,000 daily users across the continent, concentrated in France, Italy, and Spain. The launch also sits awkwardly with the company’s earlier pledges to prioritize “authentic storytelling” over low-value AI-generated content.
Experts say that practical, low-interaction AI features may be more effective in improving user experience. Moran highlighted Amazon’s AI-generated summaries of product reviews as a valuable example, providing quick insights without requiring user input. Similarly, Daniel Mügge, a researcher at the University of Amsterdam, argued that European tech investment should prioritize AI applications that solve concrete problems in robotics, manufacturing, or other sectors, rather than tools that amplify advertising or create low-quality content.
Platforms like Pinterest and YouTube are already responding to user frustration by allowing people to limit AI-generated content. Analysts say these “boring” but useful tools are shaping a more intentional approach to AI design.
“Smaller, specialized AI products can make a real difference for users without grabbing headlines,” Moran said. Mügge added that focusing on practical applications allows smaller companies to contribute meaningfully while avoiding a direct race with dominant AI developers.
As the AI hype cools, analysts agree that thoughtful, problem-focused tools are likely to outlast flashy applications, shaping the future of the Internet in ways that matter to everyday users.