Tech

Activists Launch Campaign for EU-Funded Social Media Platform

A group of activists has begun a campaign calling for the creation of a publicly funded European social media platform, after the European Commission formally registered a European Citizens’ Initiative on the proposal.

The registration allows organisers to begin collecting signatures across the European Union in support of the idea. Under the rules governing such initiatives, campaigners must gather at least one million signatures from citizens in a minimum of seven EU member states.

The signature drive is expected to take up to 12 months once it begins. Campaign organisers have up to six months to prepare the process before collecting support, meaning the entire effort could extend over roughly 18 months.

If the campaign reaches the required threshold, the European Commission would be required to consider the proposal and decide whether to draft legislation supporting the project.

The initiative reflects growing debate in Europe about the influence of global social media companies. Most of the world’s largest platforms are operated by companies based in the United States or China, and European policymakers have repeatedly criticised them over data protection, content moderation and broader social impacts.

Calls for a European alternative have intensified in recent years. The discussion gained momentum after Elon Musk purchased Twitter, since rebranded as X, in 2022. Since then, some European users have experimented with alternative platforms, although most have returned to the larger networks because of their established user bases.

One example of a European-developed platform is Mastodon, which operates through a decentralised network of servers. Despite its presence in the market, it has not achieved the same level of global popularity as the largest social media services.

Supporters of the new proposal argue that a European platform funded by society could offer a different model. According to the initiative’s description, the network would operate as a service designed for the public and would be overseen by society rather than private owners.

Campaign organisers say such a platform could remain independent from political pressure while protecting the rights of users and promoting fair treatment for all participants.

Even if the initiative succeeds in gathering the required signatures, many practical questions remain. It is unclear whether the project would involve building an entirely new platform or supporting existing services. The timeline for development is also uncertain because any new legislation would still need to pass through the EU’s lawmaking process.

If approved, the project would likely require a procurement process before development begins. This step alone could take significant time.

The cost of the proposed platform is another key issue. Organisers estimate that developing and operating the network could cost about one euro per citizen each year. Across the European Union, that would amount to roughly €450 million annually.

They argue that such a contribution would represent a small expense for individual citizens while providing Europe with a digital platform designed specifically for public interests. Whether EU institutions and member states would agree to fund such a project remains an open question.

Workplace Culture Driving Women Out of Tech Jobs in Europe, Report Finds

Women are leaving technology jobs in Europe largely because of workplace culture, according to a new report that warns the gender gap in the sector could widen as artificial intelligence reshapes the industry.

The study by consulting firm McKinsey & Company found that women accounted for just 19 percent of employees in core technology roles across Europe in 2025, a decline of three percentage points from the previous year. The drop suggests that long-running efforts to improve gender representation in the industry have failed to deliver meaningful progress.

“Workplace culture is the biggest reason why women are leaving their tech jobs,” the report said, adding that the growing influence of artificial intelligence could deepen the divide if companies fail to address the problem.

“As AI reshapes roles and value creation in tech, existing gender gaps could widen without deliberate action,” the report warned.

The gender imbalance becomes even more visible as careers progress. Women’s participation in the technology workforce falls by as much as 18 percentage points before reaching management levels. As a result, women hold only 13 percent of management positions in tech companies and just 8 percent of executive or corporate leadership roles.

Researchers say the early loss of women from the talent pipeline contributes to the lack of representation in leadership positions.

The report also found that women tend to be concentrated in a limited range of roles that do not typically lead to senior leadership positions. Women represent 39 percent of employees in product management and 54 percent in design roles. However, these positions account for a relatively small share of the overall technology workforce and rarely lead to executive decision-making roles.

In fast-growing fields such as artificial intelligence, data and analytics, men continue to dominate entry-level hiring. The report said this trend is especially concerning as AI expands across the sector.

Researchers warned that the imbalance could result in fewer perspectives shaping technologies that increasingly affect society. The report said this could create a “narrowing of perspectives at precisely the levels at which bias, accountability and societal impact must be addressed.”

The challenges facing women in tech persist even in countries known for strong gender equality policies. In Finland, women represent 36 percent of technology workers, while in Sweden the figure is 23 percent.

The study also examined why many women choose to leave the sector. Nearly half of those surveyed reported experiencing sexism or bias in the workplace over the past year, and 82 percent said they felt pressure to prove themselves more than their male colleagues.

Many respondents said they often felt isolated in their roles because they were the only woman on their team or in meetings.

The report also highlighted what it described as “office housework,” which includes tasks such as organising events or mediating team conflicts. Women are more likely to be asked to perform these duties and spend an average of 200 hours a year on them.

McKinsey said companies could address the gap by improving workplace culture, setting clear representation goals and strengthening mentorship programmes. The report also recommended investing in reskilling programmes to help women move into emerging AI roles as the technology workforce evolves.

US Government Designates Anthropic a Supply Chain Risk, Military Contractors Reconsider Use of Claude

The Trump administration has officially designated artificial intelligence company Anthropic as a supply chain risk, a move that could force government contractors to stop using its AI chatbot, Claude. The Pentagon said Thursday that it informed Anthropic leadership that the company and its products are now considered a supply chain threat, effective immediately.

The decision follows a standoff over Anthropic’s refusal to remove safety guardrails designed to prevent mass surveillance of Americans and the development of fully autonomous weapons. President Donald Trump and Defence Secretary Pete Hegseth had previously accused the company of endangering national security and threatened a series of penalties.

Anthropic CEO Dario Amodei responded that the designation is legally questionable and said the company plans to challenge it in court. He emphasised that the restrictions Claude enforces apply only to a narrow set of high-risk use cases, not to operational military decisions, and that prior discussions with the Pentagon had focused on maintaining access to Claude while arranging a smooth transition if one became necessary.

The Pentagon argued that restricting access to Claude could endanger warfighters. “The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk,” the department said. Trump has given the military six months to phase out the AI system, which is already embedded across multiple military and national security platforms.

Some defence contractors have already responded. Lockheed Martin said it will follow the Pentagon’s direction and seek other AI providers but does not anticipate major disruptions. Microsoft, whose lawyers studied the scope of the risk designation, said it can continue working with Anthropic on non-defence projects.

The move has drawn criticism from lawmakers and former officials. Senator Kirsten Gillibrand called the designation “a dangerous misuse of a tool meant to address adversary-controlled technology.” A letter signed by former defence and intelligence leaders, including former CIA director Michael Hayden, argued that applying supply chain rules to a domestic company is a “category error” and sets a troubling precedent. The letter stressed that such rules are meant to protect against foreign adversaries, not American innovators operating under the law.

Despite losing some defence contracts, Anthropic has seen a surge in consumer downloads over the past week, with more than a million people signing up for Claude daily. The app has surpassed OpenAI’s ChatGPT and Google’s Gemini in more than 20 countries’ Apple App Store rankings, reflecting public support for the company’s stance.

The dispute has also intensified Anthropic’s rivalry with OpenAI, whose CEO Sam Altman acknowledged that a recent military deal to deploy ChatGPT in classified environments was rushed and required adjustments. Amodei expressed regret over an internal note he sent criticising OpenAI and the Pentagon’s decision, apologising for language that suggested the company was punished for not offering “dictator-like praise” to Trump.

The Pentagon’s designation of Anthropic as a supply chain risk marks an unprecedented escalation in the government’s effort to assert control over AI technologies used in national security, highlighting tensions between innovation, ethics, and military priorities.

US Military Cancels Anthropic AI Contract, Turns to OpenAI for Advanced Operations

The US military has ended its contract with Anthropic, the artificial intelligence company behind the Claude chatbot, after the firm refused to remove safety guardrails designed to prevent mass surveillance and autonomous weapon use. The Pentagon has now turned to OpenAI to integrate AI systems in classified operations.

Media reports have revealed that Anthropic’s Claude AI was previously used to support operations targeting leaders in Venezuela and Iran. The chatbot reportedly assisted in a January mission that led to the capture of Venezuelan President Nicolás Maduro and was later deployed during preparations for a planned operation related to Iran’s late supreme leader, Ayatollah Ali Khamenei.

Experts say these cases provide a rare look at how advanced AI is being incorporated into US military planning and intelligence. Heidy Khlaaf, chief AI scientist at the AI Now Institute, described the rapid deployment of these systems as surprising, noting that large language models are prone to producing unreliable or incorrect outputs, which raises concerns in high-stakes environments.

The reported use of Claude aligns with the Trump administration’s push to make the US military “AI-first,” aiming to ensure the United States maintains an edge over global rivals, including China. Various forms of automation and AI have been used by the US military since the 2010s, with previous deployments focusing on logistics, maintenance, and translation services, according to Elke Schwarz, professor of political theory at Queen Mary University of London.

The Pentagon’s AI Acceleration strategy seeks to integrate AI across multiple domains, including cyber and intelligence operations. As part of this effort, a database called genai.mil allows officials to access AI tools, including Google’s Gemini and xAI’s Grok. The 2025 defence budget, dubbed the “Big Beautiful Bill,” allocates hundreds of millions of dollars to AI-related projects, including counter-drone systems, AI ecosystem development, and nuclear security missions.

While Anthropic’s $200 million partnership with the military was intended as a two-year prototype to advance national security and mitigate adversarial AI risks, the company’s refusal to remove its guardrails led to the contract being cancelled. Claude had been deployed across US government networks, including nuclear laboratories and intelligence analysis tasks.

The Department of War now faces the challenge of transitioning to OpenAI’s systems. Analysts say the intelligence gathered by Claude will likely remain in use and may be incorporated into new AI tools. Experts also warn that increasing reliance on AI in military operations could raise ethical concerns, particularly regarding the development of autonomous weapons that could select and engage targets without human oversight.

Giorgos Verdi, a policy fellow at the European Council on Foreign Relations, noted that while AI currently assists with tasks such as analysing satellite imagery, the US military’s push toward fully autonomous systems could escalate conflicts if rival nations adopt similar technology.

The Pentagon is expected to continue experimenting with AI in operations while balancing effectiveness with ethical and legal constraints, marking a pivotal moment in the integration of artificial intelligence into modern warfare.
