The Leaked Bid: Anthropic’s Hidden History with Drone Swarm Warfare

Leaked reports reveal that Anthropic bid on a $100M voice-controlled drone project earlier this year, complicating its image as a 'purely ethical' lab.

Even as Anthropic is hailed as the "ethical sanctuary" of AI after its clash with the Pentagon, new leaks have surfaced that complicate the company’s polished image. Reports indicate that earlier this year, Anthropic bid on a $100 million Pentagon project focused on voice-controlled drone swarms.

Where: The Pentagon / Leaked internal documents

The leak originated from a group calling itself "Dissenting Constitutionalists," who claim to be former Anthropic employees concerned with the "true nature" of the company’s defense aspirations.

Why: Ethical Defense or Missed Opportunity?

The leak comes at a time when Anthropic’s public identity is built largely on its resistance to military work. The report suggests that its line in the sand may be more about control and contract language than a fundamental refusal of all military applications.

What: The $100M Drone Swarm Bid

The project, code-named "Project: Legion," sought to use voice-controlled LLMs to coordinate hundreds of autonomous drones on a battlefield. Anthropic reportedly bid for the contract but was ultimately passed over in favor of a joint bid from xAI and SpaceX.

Description: The "Ethical" Defense Bid

Anthropic has not denied the bid. Instead, the company claims the proposal was "fully ethical" because it mandated a "meaningful human in the loop" at every stage of engagement. It argues that the bid was designed to show the Pentagon that drone swarms could be built without full autonomy—a proposal the DoD apparently found too restrictive.

Analysis: The Complicated Reality of AI Safety

This leak suggests that the "principled stance" Anthropic took this week may have been a last-ditch effort after being outproduced and outbid by more aggressive competitors like Elon Musk’s xAI.

Points of Tension:

  1. Moral Consistency: Can Anthropic truly be the "safety lab" if it was willing to build the voice-command layer for swarms of lethal machines?
  2. xAI's Gain: By being willing to go "all the way" with the Pentagon, xAI and SpaceX have secured a level of state patronage that Anthropic clearly wanted but couldn't get on its own terms.
  3. The 'Constitutional' Loophole: This suggests that Anthropic’s "Constitution" isn't a ban on war—it’s a set of rules for how war should be fought with AI.

Future Outlook: The Fragile Underdog

Anthropic's brand is currently its most valuable asset. If more leaks suggest their "ethics" are actually "contractual disagreements," the goodwill from the #UninstallChatGPT movement could evaporate as quickly as it arrived.
