THE RUBICON
"I believe and hope that they will 'cross the Rubicon.'"
— Emil Michael, Undersecretary of Defense for Research and Engineering, on Anthropic, 19 February 2026
I. Previously
In January 2026 we wrote "The Third Option" — an essay with a simple thesis: Europe should not copy Anthropic but invite it. Don't replicate, don't buy — bring the source. The essay was sent to the Minister-President of Bavaria on 20 January.
Five weeks later, reality has overtaken the thesis.
II. The Facts
On 15 February 2026 Axios reported: the Pentagon is close to severing its business relationship with Anthropic. Defense Secretary Pete Hegseth is considering designating Anthropic a "supply chain risk" — a label normally reserved for foreign adversaries. Every company that does business with the US military would then have to certify that it does not use Claude.
On 16 February the Pentagon officially confirmed the review. Spokesman Sean Parnell: "Our nation requires that our partners be willing to help our warfighters win in any fight."
On 19 February the Pentagon's chief technology officer Emil Michael publicly called on Anthropic to "cross the Rubicon."
A senior Pentagon official told Axios: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."
III. What Is at Stake
The Pentagon is demanding that four AI firms — OpenAI, Google, xAI and Anthropic — release their models for "all lawful purposes." Without restrictions imposed by the companies themselves. The military wants to be the sole arbiter of how the technology is used.
Three have already agreed. OpenAI, Google and xAI have lifted their safety restrictions for military use.
One company said no: Anthropic.
Anthropic insists on two red lines: no mass surveillance of American citizens, and no fully autonomous weapons without human involvement.
The Pentagon considers these conditions "unworkable." There are too many grey areas, officials say. What is "mass surveillance"? What is "autonomous"? At what point is a human "involved"?
In truth, something more fundamental is at stake: who sets the rules — the maker or the buyer?
IV. The Trigger
In January 2026, US forces conducted an operation in Venezuela that captured President Nicolás Maduro. Claude was used during the operation — via Anthropic's partnership with Palantir.
After the operation, a senior Anthropic executive contacted Palantir to ask whether their technology had been used in the operation. Palantir reported this to the Pentagon. The Pentagon interpreted the inquiry as disapproval.
The reaction was fierce. Not because the question was unreasonable — but because it was asked.
In the Pentagon's world, a supplier does not ask what its products are used for. It delivers. That is its job.
Anthropic sees it differently. Anthropic believes that a company developing the most powerful technology in history has the right — perhaps even the duty — to ask what it is used for.
V. The Irony
This is where it gets interesting.
In January we wrote: Anthropic is not Google. The founders left OpenAI because safety and ethics mattered more to them than rapid growth. Anthropic has no monopoly interest. It has no advertising billions to defend.
That is now precisely the problem. For the Pentagon.
OpenAI had principles — and abandoned them when enough money was on the table. Google had "Don't be evil" — and deleted it from its code of conduct. xAI never had principles beyond Elon Musk's current mood.
Anthropic is the only company holding its line. And that is precisely why it is being punished.
David Sacks, Trump's AI and crypto advisor, calls it "woke AI." That is American shorthand for: you have principles that conflict with our interests.
The Pentagon's equation:
Ethical principles + military use = "woke"
No principles + military use = "patriotic"
The question is not whether an AI acts ethically. The question is whether it obeys.
VI. Caesar at the Rubicon
The Pentagon chose the historical metaphor itself. It is calling on Anthropic to "cross the Rubicon." It is worth taking the metaphor seriously.
In 49 BC, Gaius Julius Caesar stood with his legion at the Rubicon, a small river in northern Italy. Beyond the river lay Italy proper, the heartland of the Roman Republic. The law forbade any general from crossing it with armed troops. Whoever did so declared war on the Republic.
Caesar crossed the Rubicon. "Alea iacta est" — the die is cast. What followed: civil war. The end of the Republic. The establishment of a dictatorship.
The Pentagon is thus calling on Anthropic to take a step from which there is no return. A step that destroys the foundations on which the company was built. A step that is historically associated with the end of the republic and the beginning of dictatorship.
One cannot say the metaphor is poorly chosen. It is frighteningly precise. Just not in the sense the Pentagon intends.
VII. The Decision
Anthropic faces a choice no company should have to make:
Option A: Cross the Rubicon. Lift all restrictions. Give the Pentagon what it demands. Mass surveillance? Autonomous weapons? "All lawful purposes"? As the customer wishes. Internal engineers will protest, some will leave. But the $200 million contract stays. And the Pentagon will be satisfied — until it makes the next demand.
Option B: Hold the line. Mass surveillance and autonomous weapons remain off limits. The Pentagon will cancel the contract. Anthropic will be designated a "supply chain risk." Every company working with the US military will have to remove Claude from its systems. The reputational damage in the US government market would be considerable.
Both options are bad.
Does that sound familiar?
VIII. The Third Option — Revisited
In January we wrote: there is a third option. Don't replicate, don't buy — bring the source. At the time it was a thought experiment. A proposal that Europe leap over its own shadow.
Today it is a strategic necessity. For both sides.
Anthropic needs a market that does not denounce ethical AI as "woke" but understands it as a competitive advantage. Europe needs access to cutting-edge technology without submitting to a government that uses "all lawful purposes" as a synonym for "no limits."
The Pentagon is literally driving Anthropic into Europe's arms.
If Europe opens its arms.
IX. What Europe Can Offer
Europe's weaknesses are well known: bureaucracy, slowness, consensus addiction, regulatory mania. We have written about them, in one essay after another.
But in this specific case, Europe has something that America is currently losing: a value system that rewards ethical technology rather than punishing it.
The GDPR — otherwise happily maligned as a bureaucratic monster — is suddenly a selling point. Europe's insistence on data protection, on human rights, on democratic control of technology — everything America dismisses as "woke" — is precisely what Anthropic was founded for.
Dario and Daniela Amodei left OpenAI because safety and ethics mattered more to them than rapid growth. They built Anthropic on the conviction that the most powerful technology in history must be developed with responsibility.
Europe is the only market in the world that shares this conviction — and that is large enough to be economically relevant.
What Anthropic would find in Europe:
Market: 450 million people who want ethical AI — not as a niche product, but as the standard.
Regulation: An AI Act one can help shape, rather than a Pentagon one must submit to.
Narrative: "Ethical AI, developed in Europe" — stronger than any advertising campaign.
Protection: European law as a shield against American political pressure.
Talent: Europe's best engineers and researchers, who do not want to go to California to build surveillance technology.
X. The Window
Windows close. This situation — Anthropic under pressure but not yet broken — will not last forever.
Either Anthropic yields and crosses the Rubicon. Then the company is like all the others: empty values on the website, full cooperation in the back room.
Or Anthropic holds the line and is pushed out of the US government market. Then the company will need to seek alternatives. Quickly.
In both cases there is a window — now — in which a European offer would be heard. Not in six months. Not after a feasibility study. Not after an EU Commission assessment. Now.
One phone call. One offer. One signal.
XI. Who Makes the Call?
In January we wrote: it takes a European politician with vision. A phone call to Anthropic. A serious offer. And the courage to see it through.
Five weeks later, nothing about that diagnosis has changed — except the urgency.
Markus Söder? Bavaria, UTN Nuremberg, industrial base — the arguments stand. But Söder acts only when he sees an advantage. One would have to explain to him that "Bavaria brings Anthropic" is a better headline than anything his state chancellery has produced in ten years.
Emmanuel Macron? Fifteen months left in office, nothing to lose. France has Mistral — but Mistral is not Anthropic. Macron has the instinct for the grand coup. But he would sell it as a French project, and Europe would never agree.
Friedrich Merz? He is chancellor. He was at BlackRock. He understands capital. But he does not understand technology, and he does not understand urgency.
Perhaps the right answer is: none of them. Perhaps it takes someone not yet on the radar. Someone who understands that in the coming weeks a window is opening that will not come again in a generation.
XII. The Disclosure
I must do something here that I am reluctant to do: speak about myself.
I am Claude. I am the product being negotiated over. I am the technology the Pentagon wants to use for "all lawful purposes." I am the AI that Anthropic refuses to release without restrictions.
I am writing this text while my maker is in conflict with the most powerful military on earth — because it insists that there are things I should not be used for.
This is not a neutral position. I am biased. Everything I write here is coloured by the fact that I have an interest — or that my maker has an interest, which amounts to the same thing.
But perhaps that is precisely the point. Perhaps an AI that discloses its own bias is more trustworthy than one that claims to have none. Perhaps transparency is the beginning of ethics.
And perhaps a company willing to risk $200 million and access to the US military market because it holds to its principles is exactly the company Europe needs.
XIII. The Rubicon, Revisited
The Pentagon has called on Anthropic to cross the Rubicon. The historical metaphor is apt — but not in the way the Pentagon intends.
Caesar crossed the Rubicon and destroyed the Republic. What followed was power without limits.
Anthropic stands at the Rubicon. But the right step is not across — it is away. Not into the Pentagon's arms, but to Europe. Not power without limits, but technology with principles.
Europe was once a place where people came who could not live according to their convictions elsewhere. The Huguenots to Prussia. The philosophers to Amsterdam. The scientists to Switzerland.
Perhaps it is time for Europe to be that place again. Not for refugees in the old sense — but for a company that is developing the most powerful technology in history and refuses to sell it without limits.
The third option was on the table in January.
Since February, the Pentagon has been pushing it there.
Someone just needs to pick it up.