Government Meets AI: What the US-Anthropic Dispute Tells Us
Governments around the world are seeking to rapidly integrate artificial intelligence (“AI”) into public services, defence, and administrative functions. As AI capabilities expand, so do questions about governance, ethical boundaries, and the terms on which the public sector engages with private technology providers. A recent dispute between the US government and AI developer Anthropic has brought these issues into sharp focus, offering valuable lessons for policymakers and businesses globally, including in the United Kingdom.
Background
In February, President Trump ordered all federal agencies to stop using technology developed by Anthropic, one of the leading AI companies in the United States, with immediate effect. The announcement marked a significant escalation in the dispute between the AI developer and the Department of Defense over the permissible uses of AI technology.
The conflict appears to centre on Anthropic's refusal to grant the US military unrestricted access to its AI tools. The company has expressed concerns about its technology being used for "mass surveillance" and "fully autonomous weapons".
What This Means
Under President Trump's directive, Anthropic's tools will be phased out of all government work within the next six months. The company has stated that the impact on its commercial customers will be limited to organisations that also contract with the military; those customers may need to discontinue their use of Anthropic products for defence-related work.
The UK Context
The UK government has been actively encouraging the adoption of AI across the public sector. The National AI Strategy and subsequent policy frameworks have positioned the United Kingdom as a leader in the development of responsible AI. However, UK public bodies’ increasing reliance on AI tools from major international providers is raising similar questions about usage terms, data governance, and ethical boundaries.
Although the UK's regulatory approach differs from that of the US, the Anthropic dispute highlights risks relevant to British organisations. Government departments and contractors using AI tools must consider whether their agreements adequately address usage restrictions, and what contingencies exist should a key technology provider become unavailable due to commercial, regulatory, or geopolitical factors.
Looking Ahead
For businesses operating in the defence and government contracting space, whether in the US, the UK, or beyond, these developments underscore the importance of monitoring supply chain designations and understanding the contractual and regulatory risks associated with technology partnerships. Organisations should review their AI procurement arrangements, assess concentration risk where critical functions depend on a single provider, and ensure that their contractual terms clearly define permissible uses and transition arrangements.
In the UK, the evolving situation in the US may also inform domestic policy debates about AI governance, particularly as the government considers how to balance innovation with safeguards for sensitive applications. The outcome of Anthropic's anticipated legal challenge will be closely monitored by industry participants, policymakers, and legal commentators on both sides of the Atlantic.