Tech
Microsoft Unveils ‘Mico’: A Friendly New Face for Copilot Assistant
Nearly three decades after Clippy — the animated paperclip that became both famous and infamous for interrupting Microsoft Office users — Microsoft has introduced a new digital companion called Mico, a floating cartoon face designed to represent its Copilot assistant.
Unlike Clippy, which was often criticised for being intrusive, Mico is meant to be subtle, expressive and user-friendly. The character, shaped like a glowing blob or flame, reacts to conversations by changing colour and expression — smiling, frowning, or spinning with excitement.
Jacob Andreou, Microsoft’s corporate vice president of product and growth, described Mico as a step toward making technology more relatable without being overbearing. “When you talk about something sad, you can see Mico’s face change,” he told the Associated Press. “It’s about creating a companion you can really feel.”
Currently available only to U.S. users on laptops and mobile apps, Mico can be turned off easily — a feature that sets it apart from its predecessor, Clippy, which was notorious for popping up uninvited.
Experts suggest that the timing is right for such an innovation. “Microsoft pushed Clippy; we resisted it, and they got rid of it,” said Bryan Reimer, a research scientist at the Massachusetts Institute of Technology and co-author of How to Make AI Useful. “I think we’re much more ready for things like that today.”
Reimer explained that digital assistants with personality can help users feel more comfortable, especially those who might distrust purely mechanical interactions. “People who are less trustful of machines respond better to technology that feels a little more human,” he said.
Microsoft’s approach stands apart from others in the industry. While some companies are introducing flirtatious or overly human-like avatars, and others have opted for neutral, faceless designs, Microsoft says it wants Mico to strike a balance — engaging but not addictive.
Andreou emphasised that Mico is designed to be “genuinely useful,” not manipulative. “We don’t want it to just tell users what they want to hear or monopolise their attention,” he said.
The company also rolled out new Copilot features, including the ability to join group chats and a “voice-enabled Socratic tutor” for students — a move aimed at making its tools more educational and collaborative.
As more children and teenagers turn to digital assistants for learning and emotional support, regulators have raised concerns about potential risks. While Microsoft was not among the companies recently investigated by the U.S. Federal Trade Commission, the tech giant says it is prioritising safety and responsible design.
With Mico, Microsoft seems to be revisiting the idea behind Clippy — but this time, with a softer touch and a sharper understanding of what users actually want.
Cambridge Index Reveals Global Black Market for Fake Social Media Verifications
A new index developed by the University of Cambridge has revealed the scale and affordability of the underground market for fake social media account verifications, raising fresh concerns about online manipulation and digital security. According to researchers, fake verification badges can be purchased for as little as eight cents, enabling the rapid creation of networks that imitate authentic users across major online platforms.
The Cambridge Online Trust and Safety Index (COTSI), launched on Thursday, is described as the first global tool capable of tracking real-time prices for fraudulent account verification. The index monitors more than 500 platforms, including TikTok, Instagram, Amazon, Spotify and Uber. By analysing data from sellers operating across the dark web and black-market channels, the project highlights how accessible and inexpensive these services have become.
Researchers say the low cost of creating fake accounts is contributing to the rise of “bot armies” — large groups of automated or semi-automated profiles designed to mimic genuine human activity. These networks can distort online conversations, amplify misleading content, and promote scams or commercial products. They can also be deployed to influence political messaging, creating an illusion of public support or opposition during major events such as elections or policy debates.
The team behind the index said the findings come at a sensitive time for governments and regulators working to contain misinformation. Many popular platforms have reduced investment in content monitoring during the past two years, while others have introduced programmes that reward users for generating high volumes of engagement. Researchers warn that such incentives may encourage the use of artificially inflated interactions, making fake accounts even more valuable to those seeking influence.
According to Cambridge analysts, the market for fraudulent verification has become highly sophisticated. Sellers offer tiered packages, guaranteeing features such as blue-badge symbols, verified rankings or the appearance of longstanding account history. Prices vary by platform and country, but the index shows that even the most complex packages remain within easy reach for groups attempting to manipulate public debate or carry out coordinated campaigns.
The launch of COTSI marks the first attempt to document these prices on a global scale. By presenting live data on the cost of creating fake identities, researchers hope to give policymakers, technology companies and security agencies a clearer picture of how digital manipulation is evolving. The study’s authors stress that tracking these markets is essential for understanding the risks posed by unauthenticated accounts, particularly during periods of political tension.
The university said the index will be updated regularly and will remain publicly accessible as part of its efforts to strengthen digital transparency worldwide.
Experts Question Impact of Australia’s New Social Media Ban for Children Under 16
Australia has introduced sweeping restrictions that prevent children under 16 from creating or maintaining accounts on major social media platforms, but experts warn the measures may not significantly change young people’s online behaviour. The restrictions, which took effect on December 10, apply to platforms including Facebook, Instagram, TikTok, Snapchat, YouTube, Twitch, Reddit and X.
Under the new rules, children cannot open accounts, yet they can still access most platforms without logging in—raising questions about how effective the regulations will be in shaping online habits. The eSafety Commissioner says the reforms are intended to shield children from online pressures, addictive design features and content that may harm their health and wellbeing.
Social media companies are required to block underage users through age-assurance tools that rely on facial-age estimation, ID uploads or parental consent. Ahead of the rollout, authorities tested 60 verification systems across 28,500 facial recognition assessments. The results showed that while many tools could distinguish children from adults, accuracy declined for users aged 16 and 17, for girls, and for non-Caucasian users, with estimates sometimes off by two years or more. Experts say these limitations mean many teenagers may still find ways around the rules.
“How do they know who is 14 or 15 when the kids have all signed up as being 75?” asked Sonia Livingstone, a social psychology professor at the London School of Economics. She warned that misclassifications will be common as platforms attempt to enforce the regulations.
Meta acknowledged the challenge, saying complete accuracy is unlikely without requiring every user to present government ID—something the company argues would raise privacy and security concerns. Users over 16 who lose access by mistake are allowed to appeal.
Several platforms have criticised the ban, arguing that it removes teenagers from safer, controlled environments. Meta and Google representatives told Australian lawmakers that logged-in teenage accounts already come with protections that limit contact from unknown users, filter sensitive subjects and disable personalised advertising. Experts say these protections are not always effective, citing studies where new YouTube and TikTok accounts quickly received misogynistic or self-harm-related content.
Analysts expect many teenagers to shift to smaller or lesser-regulated platforms. Apps such as Lemon8, Coverstar and Tango have surged into Australia’s top downloads since the start of December. Messaging apps like WhatsApp, Telegram and Signal—exempt from the ban—have also seen a spike in downloads. Livingstone said teenagers will simply “find alternative spaces,” noting that previous bans in other countries pushed young users to new platforms within days.
Researchers caution that gaming platforms such as Discord and Roblox, also outside the scope of the ban, may become new gathering points for young Australians. Studies will be conducted to assess the long-term impact on mental health and whether the restrictions support or complicate parents’ efforts to regulate screen time.
Experts say it may take several years to determine whether the ban delivers meaningful improvements to children’s wellbeing.
OECD Warns of Sharp Rise in Cyberbullying Across Europe