Workers at Google DeepMind in Britain have started a union drive to push back against the company’s expanding role in defense-related artificial intelligence projects. The move is a major moment for the tech sector, where staff are increasingly raising ethical concerns about supplying advanced AI tools to governments and military agencies.
The effort went public on May 5 after employees at the London AI research hub formally asked management to recognize the Communication Workers Union (CWU) and Unite the Union as representatives for staff. Those involved say organizing collectively will give them more say in how artificial intelligence is used in combat, surveillance, and defense work.
The union push gathered steam after Google’s recent deal with the U.S. Department of Defense. Under the arrangement, the Pentagon can reportedly run Google’s AI systems inside classified military networks for what officials called “lawful government purposes.”
Many staff pushed back on the deal, arguing that AI built for scientific progress should not be folded into military programs or combat activities. Employees worry the technology might end up supporting surveillance, weapons, or targeting systems.
The dispute comes as the U.S. government deepens ties with top AI firms to expand military AI capabilities. Those partnerships have fueled global debate about responsibility, transparency, and the risk of fast-advancing AI being misapplied.
The issues at Google are part of a larger clash between AI companies and defense bodies. Earlier this year, reports said the Trump administration told federal agencies to stop using Anthropic’s tech after the firm refused to strip out safeguards that restrict military uses of its AI.
Anthropic had warned that open-ended use of its systems could aid autonomous weapons or mass surveillance. Those concerns reportedly came up after the company’s tools were linked to U.S. operations involving Venezuela and former President Nicolás Maduro.
While Anthropic first pushed back on Pentagon requests, later talks between company leaders and the White House suggested efforts to keep working with government agencies, especially around its advanced AI model called Mythos.
Right now, there’s little public detail on whether Google’s own Pentagon deal places limits on how defense officials can use its AI tools.
Staff pushback on defense projects isn’t new at Google. In 2018, the company joined a contested Pentagon program that used AI to analyze drone surveillance footage.
That project drew heavy internal criticism. Thousands of employees signed protest letters and petitions, and some quit over the contract. The backlash eventually led Google to exit the deal and avoid similar military work for several years.
Workers now feel the company’s position has shifted. Earlier this year, reports said about 600 employees signed another petition against renewed defense ties and raised questions about how the AI might be deployed. Unlike the earlier case, the company did not change course.
Instead, Google stood by its role, saying it was proud to offer AI infrastructure and services for national security. Many staff concluded that petitions and internal complaints no longer move leadership, prompting them to turn to formal union organizing.
The DeepMind employee drive is being called a historic step because it is one of the first major union efforts inside a leading AI lab to be directly tied to opposition to defense contracts.
Supporters think it could spark wider labor action across tech, especially among workers worried about the ethical path of AI development.
John Chadfield, the CWU's national officer for tech workers, said the move is a key moment for tech staff. He noted that workers are showing support for people affected by armed conflict while seeking more control over how their skills are used.
Chadfield added that organizing together gives employees more power to challenge defense partnerships and press tech firms toward stronger ethical standards for advanced AI.
In addition to forming a union, DeepMind staff in multiple countries are reportedly weighing other collective actions. Options include protests and “research strikes,” where workers decline to join projects connected to military use.
Campaign backers say AI researchers should have a real role in deciding whether the tools they build are used in war, surveillance, or other disputed areas.
CWU General Secretary Dave Ward also supported the effort, calling it historically important. He warned that AI made without proper accountability and public oversight could cause broad harm.
Ward added that workers seeking more influence over their work continue a long tradition of defending workplace rights, ethical norms, and social duty.
The campaign has also become linked to wider criticism of how advanced technology is used in war zones, especially regarding Israel's military operations in Gaza. Some workers in the movement have urged Google to stop supplying tools that could support military actions tied to civilian harm or surveillance.
Labor advocates believe the DeepMind union effort could serve as a model for future organizing across the global tech industry as AI becomes more tied to military and political strategies.
The case underscores rising friction between corporate deals with governments and employee demands for ethical oversight in the fast-growing AI field.
The broader conversation has also highlighted the role of press freedom and independent reporting. Supporters of independent media argue that coverage of labor actions, defense deals, and AI ethics is vital during times of rising global and political tension.
Advocates say investigative journalism is crucial for showing how new technologies are used by states and corporations, especially in areas tied to warfare, surveillance, labor rights, and civil liberties.
The DeepMind organizing effort sits within a much larger discussion connecting AI to labor rights, environmental costs, income gaps, education policy, and military growth.
Recent debates have examined the environmental footprint of AI infrastructure, including its heavy energy and water demands, while other discussions have questioned the rising use of automation in classrooms and workplaces.
Other labor discussions have covered union activity, economic struggles in conflict areas, worker protections, and the growing closeness between powerful tech firms and government bodies.
As artificial intelligence reaches into nearly every part of modern life, workers, researchers, activists, and policymakers are increasingly asking who should guide these tools and what ethical rules should shape their future design and use.