The Cyber Solicitor

AI Governance

What you missed from Anthropic's fight with Trump

The power inversion, illusory guardrails and state surveillance

Mahdi Assan
Mar 20, 2026
Image from Yahoo! News

The conflict between Anthropic and the US Government is a big deal, but not in the way that most people think.

It is not just a contract disagreement or an AI ethics debate. I think it represents something bigger.

To give a very simplified overview of what happened between Anthropic and the US Department of War:

  • In mid‑2025, the Pentagon awards large multi‑year AI contracts (up to about 200 million dollars each) to several firms, including Anthropic, to help build out military AI capabilities

  • As the government builds an internal generative‑AI ecosystem (often described as GenAI.mil), negotiations focus on how broadly military users can deploy Anthropic’s models

  • The Department insists on “all lawful uses” language; Anthropic agrees except for two red lines: no mass domestic surveillance and no fully autonomous lethal weapons. These safety guardrails become the core of the dispute

  • The administration publicly hardens its stance, promoting an “AI‑first” warfighting strategy and rebranding rhetoric around the Pentagon as the “Department of War,” signaling a more aggressive doctrine

  • In late February 2026, defense officials issue Anthropic an ultimatum: remove the contractual bans on mass domestic surveillance and autonomous lethal weapons or risk severe consequences

  • Anthropic refuses before the deadline, reiterating that those uses are outside what its models can safely or ethically support

  • After talks break down, the administration orders federal agencies to stop using Anthropic’s AI tools and begin transitioning away over several months

  • The Department of War designates Anthropic a “supply chain risk to national security,” a label normally associated with foreign adversaries; this effectively bars defense contractors from using Anthropic products in government work

  • Anthropic publicly rejects the designation as unlawful and politically motivated, arguing it is being punished for insisting on safety guardrails

  • In March 2026, Anthropic files suit against the U.S. government, challenging the supply‑chain‑risk classification and seeking to protect its right to limit high‑risk military uses of its AI

I’ve read good takes on the situation by various writers here on Substack. You should also check out the coverage from Privacat, Jasmine Sun and Dean W. Ball, just to name a few.

Reading all of this commentary, I kept noticing an overarching set of themes that could be threaded together.

In fact, the fight between Anthropic and the USG felt quite familiar to me. My gateway into tech law and policy was state surveillance and the Snowden revelations in 2013. I became fascinated with the way technology and law collided, a convergence of forces that advances and organises our societies.

This, for me, signalled one of the most important questions of our time (for which I have quoted Jamie Susskind, author of Future Politics: Living Together in a World Transformed by Tech, many times before, because he puts it far more eloquently than I could):

To what extent should builders of powerful digital systems be concerned with data rights, and what does this look like in practice?

What I see in Anthropic vs the USG are three converging patterns:

  1. The state depends on private tech firms more than the reverse, and this will only continue in the future

  2. Safety guardrails for LLMs are something of a technical illusion, which creates a big governance gap

  3. LLMs could bring surveillance, whether by the state or other entities, to a whole new level

In this newsletter, I cover these three converging ideas (power inversion, safety guardrails and the evolution of surveillance) and how Anthropic vs the USG offers a glimpse of the defining tension of the 21st century.
