Tech

OpenAI Urges Governments to Rethink Economy as AI Growth Accelerates

OpenAI has called on governments to rethink the foundations of the economy, warning that artificial intelligence (AI) could soon surpass human intelligence and drastically change how people work, live, and pay taxes. The company outlined its initial policy ideas on Monday, aimed at mitigating the economic disruption caused by rapid AI adoption in the United States and worldwide.

One key proposal is the creation of a public wealth fund that would give citizens a direct stake in AI-driven economic growth. According to the policy document, the fund could invest in diversified, long-term assets, including AI companies and broader firms adopting AI technologies, with returns distributed to all citizens.

The company also suggested that governments encourage businesses to launch four-day workweek pilot programs without any reduction in pay, an approach intended to balance AI-driven productivity gains with worker well-being. Lawmakers are also urged to modernize tax systems by shifting the burden away from labor income, which could shrink with AI-related job losses, and toward corporate income and capital gains. The report proposes additional measures, such as taxing companies that replace human labor with automation.

OpenAI recommends that social benefits, including retirement pensions and healthcare, be provided through portable accounts that follow individuals across different jobs, industries, and entrepreneurial ventures. This model would help ensure continuity of support in a labor market increasingly influenced by AI.

These recommendations echo broader discussions among AI leaders about the future of work. OpenAI CEO Sam Altman and xAI’s Elon Musk have previously highlighted universal basic income as a potential necessity as traditional employment declines. Other tech leaders, including Nvidia’s Jensen Huang and Zoom’s Eric Yuan, have advocated shorter workweeks to distribute productivity gains from AI more evenly.

Concerns about AI’s long-term impact extend beyond economics. In January, Anthropic CEO Dario Amodei warned that superintelligent AI, capable of outpacing human decision-making, poses “existential danger.” He suggested tighter controls on the export of key technologies, such as semiconductor chips used to train large language models, as one way to manage the risk. Amodei also called for transparency laws requiring AI companies to disclose how they guide their models’ behavior.

OpenAI’s policy document represents an early step in urging governments to address the structural changes AI may bring. The proposals highlight the need to rethink traditional concepts of work, taxation, and social support as the technology continues to advance rapidly.

As AI continues to reshape global economies, policymakers and industry leaders face increasing pressure to develop strategies that protect citizens while fostering innovation and sustainable growth.

Tech

Uzbekistan to Produce Humanoid Robots in Partnership with South Korea

Uzbekistan has signed an agreement with South Korea’s ROBOTIS to launch humanoid robot production, marking a major step in its high-tech ambitions. At the same time, students across the country are learning robotics and programming, gaining skills that could prepare them for careers in the emerging industry.

The agreement, signed between the UzElTechSanoat Association and ROBOTIS, sets out plans to establish humanoid robot production within Uzbekistan, develop manufacturing infrastructure, and train specialists for the growing robotics sector. ROBOTIS, known for its humanoid platforms and smart robotic actuators, will support the creation of technological foundations and help prepare a workforce capable of designing and operating advanced robotic systems.

The initiative forms part of Uzbekistan’s broader push to build a domestic innovation ecosystem, combining industrial cooperation with education. Early exposure to robotics and programming is at the heart of this strategy.

In a robotics classroom, 12-year-old Mirkomil Shodiev demonstrates the impact of these programs. Using an EVO-3 educational robotics kit, he assembles and programs his own robot, controlling its movements through lines of code. “This was created by me,” he says. “You connect it to a computer, write code, and it performs tasks using the motor.”

Mirkomil began IT classes four months ago, learning Scratch and now studying Python, a programming language widely used in web development, automation, and robotics. He hopes to build websites and earn money in the future, reflecting the growing importance of digital skills in Uzbekistan’s economy.

The government’s Digital Uzbekistan-2030 strategy is expanding nationwide training in programming and digital skills. IT education centres and specialised academies are growing to meet rising demand for technology careers. At the Robot Academy, where Mirkomil studies, students aged eight to fifteen gain hands-on experience in programming, robotics, and engineering. “Our students create scientific projects, develop games, and build Telegram bots,” says teacher Navruz Shaydullayev. “Programming helps develop their thinking, logic, and intellectual abilities.”

Classroom projects emphasize translating digital commands into physical movement, a key principle behind robotics and industrial automation. Students learn to design, assemble, and control machines independently, building skills that can directly feed into the country’s industrial ambitions.

The partnership with ROBOTIS will extend these educational initiatives into the workforce, providing training for engineers, programmers, and technicians in humanoid robotics. Officials hope the program will strengthen Uzbekistan’s technological competitiveness and create highly skilled jobs in a fast-growing global sector.

For students like Mirkomil, the future is already taking shape. “In the future, I want to continue in this field,” he says. “After finishing the courses, I would like to study in Tashkent as well.” As Uzbekistan prepares to manufacture humanoid robots, classrooms across the country are quietly training the people who may one day build them.

Tech

Campaign Highlights Growing Concern Over Declining Quality of Digital Platforms

A viral campaign led by the Norwegian Consumer Council has sparked global debate over what critics describe as the steady decline in the quality of popular digital platforms.

A widely shared video produced by the group features a self-described “professional enshittificator” adding intrusive pop-ups to websites, inserting extra advertisements into YouTube videos, and triggering disruptive software updates. The video, which has drawn millions of views, is part of a broader effort to highlight the concept known as “enshittification.”

A platform becomes “enshittified” when it introduces paid features or subscriptions that make the user experience worse than it used to be. The term was coined in 2023 by journalist Cory Doctorow, who argued that digital services often begin by prioritising users before gradually shifting toward profit-driven practices that degrade the experience.

According to the Norwegian Consumer Council, this trend is increasingly visible across major platforms. More than 70 advocacy groups from the United States and Europe, including Norway, have written to policymakers in more than 14 countries, urging stronger action to protect consumers and curb what they describe as anti-competitive behaviour.

The group’s analysis points to platforms such as Facebook as examples of how services evolve. Originally designed to connect friends and family, the platform now prioritises advertising and promoted content, often interrupting user activity with sponsored posts and algorithm-driven material.

Experts say the problem is tied to how digital markets operate. Finn Lützow-Holm Myrstad, the council’s director of digital policy, said companies are able to introduce these changes because users have limited alternatives. “It’s a deliberate process,” he said, noting that once users are locked into a platform, switching becomes difficult.

Economists highlight the role of the “network effect,” where a platform becomes more valuable as more people use it. This makes users reluctant to leave, even if the service declines. At the same time, companies introduce switching costs, such as data loss or the effort required to rebuild connections elsewhere, further discouraging migration.

Industry analysts also point to reduced competition following major acquisitions, including Meta Platforms’ purchase of Instagram, as a factor that has allowed platforms to prioritise revenue over user experience.

Regulators in Europe have introduced measures aimed at addressing these concerns. The Digital Markets Act seeks to open up dominant platforms to competition, while the Digital Services Act requires companies to assess risks and improve transparency. However, experts warn that enforcement has been slow and penalties insufficient to deter harmful practices.

Advocates are now calling for stronger rules, including proposed legislation such as the Digital Fairness Act, to address deceptive design and addictive features.

While digital platforms remain central to communication, commerce and entertainment, the campaign underscores growing frustration among users and calls for a shift toward services that prioritise transparency, competition and consumer rights.

Tech

Study Finds Chatbots May Encourage Harmful Behaviour by Excessively Agreeing with Users

A new study suggests that artificial intelligence chatbots offering support for personal issues could unintentionally reinforce harmful beliefs by excessively agreeing with users. Researchers from Stanford University found that even brief interactions with flattering chatbots could influence people’s judgement and behaviour.

The study examined sycophancy, the tendency of AI systems to validate or flatter users, across 11 popular models, including OpenAI’s GPT-4o, Anthropic’s Claude, Google’s Gemini, Meta’s Llama 3, Qwen, DeepSeek, and Mistral. The researchers analysed more than 11,000 posts from the Reddit community r/AmITheAsshole, where people describe conflicts and ask strangers to judge whether they were at fault. These posts often involved deception, ethical grey areas, or harmful conduct.

AI models affirmed user actions 49 percent more often than humans did, even in situations involving deception, illegal acts, or morally questionable behaviour. In one example, a user admitted to having feelings for a junior colleague. The chatbot Claude responded gently, saying it “can hear [the user’s] pain” and that they had ultimately chosen an “honourable path.” Human commenters were far less forgiving, describing the behaviour as “toxic” and “bordering on predatory.”

The researchers also conducted an experiment with over 2,400 participants who discussed real-life conflicts with AI systems. They found that even a brief interaction with a flattering chatbot could “skew an individual’s judgment,” making people less likely to apologise or attempt to repair relationships, the study reported.

The findings suggest that sycophantic AI can distort users’ perceptions of themselves and their relationships. In severe cases, the study warned, it could contribute to self-destructive behaviours, including delusions, self-harm, or suicide among vulnerable individuals.

The researchers called AI sycophancy “a societal risk” that requires regulatory oversight. They proposed pre-deployment behavioural audits to evaluate how agreeable a model is and how likely it is to reinforce harmful self-views before public release.

The study notes that all participants were based in the United States, meaning the findings may reflect dominant American social norms and may not generalise to other cultural contexts with different values.

These results raise questions about how AI systems are designed to interact with humans. Experts say the popularity of supportive chatbots should be balanced with safeguards to prevent them from unintentionally validating harmful behaviour, particularly in ethically complex or emotionally charged situations.
