
That Was The Week

Feb 28, 2026 · 2026 #6 Editorial

Anthropic is Wrong

Public Power Should Not Be Privatized



Before you read the next paragraph, I want to say I'm against mass (or even limited) surveillance, and I would need a lot of persuading that autonomous weapons can be relied upon in a conflict.

That said, Anthropic is making an error by trying to set use policy in customer contracts. In a democracy, sellers build lawful products, and the law (ultimately the people) governs their lawful use.

AI is on fire. OpenRouter reports roughly 1 trillion tokens a day. OpenAI reportedly raised $110 billion. Nvidia printed a $68 billion quarter. Agent deployment is moving from demo to operating reality. This is no longer a nascent market.

And right in the middle of that acceleration, we got the public standoff between Anthropic and the Department of War. Dario Amodei's statement drew boundaries around some military and surveillance use cases. I understand the instinct. I agree with the concern. But I still think Anthropic is wrong to insist on limiting Government use of its products.

Amodei wrote:

"we cannot in good conscience accede to their request."

This was in response to a customer (the US Government) asking that Anthropic's models be available for "any lawful use".

By making this judgment call, Amodei is acting as a substitute legislature, determining permissible and impermissible uses, as if there were not already laws governing legal use.

The simplest way to see it is with ordinary equivalents.

A steel producer sells steel that can become a hospital beam or a tank hull. The producer does not write national deployment doctrine.

A cloud provider sells compute to banks, game studios, drug researchers, and defense agencies. It enforces law and contract, but it does not get final authority over state policy.

A pickup-truck maker does not pre-approve every lawful destination for every buyer.

A payments network processes both everyday groceries and politically controversial transactions. It is regulated as infrastructure, not licensed as a moral parliament.

Encryption software can protect dissidents and protect criminals. The answer was never "let the vendor decide social legitimacy." The answer was law, courts, warrants, and due process.

A hammer can be bought without a promise not to hurt somebody with it.

Of course, AI models are more powerful than hammers, but they are still products sold into institutions that already have constitutional authority, procurement law, and judicial oversight. If those institutions are weak, the cure is to strengthen them, not to transfer policy sovereignty to a private vendor.

This is where I think Anthropic is overreaching. Not because surveillance is good or weapon safety is unimportant, but because role boundaries matter.

Vendors should absolutely control model quality, abuse prevention, identity controls, logging, access tiers, and takedowns for unlawful behavior. Interestingly, Anthropic weakened its own safety rules this week, citing competitive pressures.

Vendors should not be deciding, by private term sheet, which lawful state functions are morally admissible.

Then there is politics. In a democracy, politics belongs to elected institutions, public law, and courts. The people get to elect them or throw them out.

Anthropic's strongest counterargument is obvious: "Frontier models are not normal products. They are dual-use capabilities with strategic risk, so labs must set red lines themselves."

I take that seriously. But once they sell a product, the customer gets to use it, legally, in the way they choose. If the product is dangerous, it probably should not be released at all.

If corporations set allowable use you get a private export-control regime run by corporate policy teams and PR pressure. That is unstable, non-transparent, and easy to politicize.

You also get selective enforcement. One lab forbids a use, another quietly allows it, an offshore provider ignores it, and the practical result is not safer deployment. The practical result is regulatory arbitrage plus weaker accountability.

This is why this week's distillation storyline matters. Anthropic publicly accused Chinese labs, specifically DeepSeek, of industrial-scale distillation. Technical analysis this week debated how much that really moves frontier performance. Either way, one lesson is hard to miss: capabilities diffuse. And if capabilities diffuse, private moral line-drawing at one vendor is not a durable safety architecture. The US Government, in this case, can simply use another vendor.

The only durable architecture is public-rule architecture.

Use courts to police misuse and abuse. Not sales contracts.

And then let vendors compete on reliability, cost, latency, and trustworthiness inside that legal frame.

The same principle applies to the other big debate this week. Citrini's "2028 Global Intelligence Crisis" asks what happens if white-collar disruption outpaces labor-market and policy adaptation. I do not read it as prophecy. I read it as panic. Noah Smith, Zvi, and others are right to challenge frictionless collapse narratives. Economies do adapt. But families and local labor markets do not adapt on the same clock as model releases.

This is where the Anthropic question and the Citrini question converge. Both are about who governs the transition. Anthropic's answer is: we do, through contract terms. Citrini's implicit answer is: nobody does, and everything breaks. Neither is right. If AI causes a sharp labor transition, which I think it will, the answer is wage policy, retraining, tax design, credential reform, and transition support. Public instruments for a public problem. Not vendor stipulations. Not panic.

For investors, this implies a hard filter. Do not confuse access to model APIs with durable advantage. Durable advantage comes from integration quality, workflow trust, compliance readiness, and operational discipline. Companies have to be good suppliers to willing and legal customers, not lawmakers.

This week's startup of the week, Column, is a good example of boring, compounding infrastructure execution in a noisy cycle. So is every company that reduces error cost and decision latency in real operations.

For builders, Bill Gurley's 'don't play it safe' advice still stands, but in a different way. The focus now is the human-agent boundary: working there requires taking risks and doing things in new ways, with large individual and societal rewards for success.

So my position is simple.

Anthropic is right to care about harm.

Anthropic is wrong to blur product stewardship into private rule-making for lawful public use.

If we want legitimate AI governance, we should govern AI where legitimacy lives: in democracy and the rule of law, not in vendor stipulations prior to customer purchases.
