Pentagon Terminates $200M Anthropic Contract Over Surveillance, Autonomous Weapons Restrictions

The U.S. Department of Defense has officially ended its $200 million contract with AI company Anthropic and ordered military contractors to cease use of Anthropic products, citing disputes over permissible uses of the technology.

Key Facts:

  • Dispute scope: Anthropic had contractually prohibited the Pentagon from using its AI for mass surveillance of U.S. civilians or development of fully autonomous weapons systems.
  • Contract date: The agreement was signed in 2025.
  • Enforcement action: The military has now issued directives to all contractors requiring cessation of Anthropic product use.
  • Anthropic's position: The company explicitly stipulated operational restrictions from contract inception—not a recent policy change.

The friction: The Pentagon apparently sought capabilities that Anthropic's terms of service explicitly forbade. Rather than negotiate revised restrictions, the government terminated the relationship.

Broader implication: The EFF frames this as a governance problem, in which critical privacy protections depend on individual corporate decisions rather than statutory law. When a protection exists only in a vendor's terms of service, the leverage cuts both ways: a company can restrict government use, but the government can simply drop that vendor. The Pentagon's move suggests the department prioritizes operational freedom over contractual partnerships with ethical guardrails.

Status: Developing. The EFF article appears incomplete (abruptly cut mid-sentence), and no DOD statement has been independently confirmed by this outlet.

Source: EFF Deeplinks