The Diary

Mar 28, 2026 · 2026 #10

Growing Up?

Winning Wars Involves Losing Battles

## Editorial:

Growing Up? Winning Wars Involves Losing Battles

A federal judge ruled this week that the Pentagon punished Anthropic for talking to the press. Not for failing to deliver. Not for demanding policy in a contract. For speaking publicly about a policy disagreement.

Judge Rita Lin called it "classic illegal First Amendment retaliation." The amicus brief supporting Anthropic was signed by Amazon, Apple, Google, Meta, Nvidia, OpenAI, Intel, and TSMC - which is to say, the entire American technology industry lined up to tell the government that weaponizing procurement against speech is a line it cannot cross.

In doing so, she failed to resolve the key contested point: can Anthropic dictate to Government how its product is used in real time in war? The interim injunction will of course be contested, and in all likelihood replaced, but the real issue, Government authority over weapons and their use, is not even being debated. The answer is a foregone conclusion.

That ruling is this week's most important event. Not because it settles the question of AI in warfare - the judge explicitly declined to touch that - but because it reveals the gap between how fast AI is going mainstream and how poorly prepared the AI companies are for their new weight in global politics and economics. Governments are even less well prepared.

I am not sure why Anthropic and its counsel asked for a free speech ruling, as in the long run that is not really the point. But well done for securing the short-term relief.

For Amodei, the dispute with Government should be a "growing up" moment. Instead he is insisting on his freedom of speech, which of course is his right. But that won't get him back into mainstream supply chains.

Also this week, OpenAI closed Sora as a standalone product, integrating its features into ChatGPT. Because Sam Altman is unpopular among the noterati, this was editorialized as a failure. In reality it is his "growing up" moment, and one he chose to take.

OpenAI is focusing more and more on removing peripheral experiments and delivering on core products. In the big picture that is the right thing to do.

If AI is to become a trusted partner of enterprises and Governments it will have to start losing battles in order to win wars.

This week Dario Amodei focused on winning a battle. Sam Altman declared that he had lost a battle to focus on bigger future objectives.

Jensen Huang's GTC keynote was, as Om Malik put it, a pitch for a trillion-dollar token factory. He, too, is playing for big outcomes.

His point: LLM training was the capital expense of the first phase. Inference is the operating expense of the current phase. Agents are the equivalent of everyone leaving their engine running 24 hours a day; that is the next phase. He wants to supply a growing inventory of solutions against the entire opportunity set.

Nvidia's Groq acquisition makes sense through this lens: disaggregate the inference pipeline so different silicon handles different stages, the way a modern factory uses different machines for different operations.
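To make the factory metaphor concrete, here is a minimal, purely illustrative sketch of what disaggregation means, not Nvidia's or Groq's actual design; the device pools, per-token latencies, and request sizes below are invented for illustration. Prefill (reading the prompt) is compute-bound, while decode (generating tokens one at a time) is memory-bandwidth-bound, so a disaggregated serving stack can route each stage to silicon suited to it.

```python
# Illustrative only: toy model of a disaggregated inference pipeline.
# All numbers are hypothetical; the point is the routing, not the figures.
from dataclasses import dataclass

@dataclass
class Request:
    prompt_tokens: int   # tokens read during prefill
    output_tokens: int   # tokens generated during decode

# Two hypothetical device pools, each tuned for one stage.
PREFILL_POOL = {"name": "compute-optimized", "ms_per_token": 0.05}
DECODE_POOL  = {"name": "bandwidth-optimized", "ms_per_token": 2.0}

def disaggregated_latency(req: Request) -> float:
    """Route prefill and decode to separate pools and sum the stage latencies."""
    prefill_ms = req.prompt_tokens * PREFILL_POOL["ms_per_token"]
    decode_ms = req.output_tokens * DECODE_POOL["ms_per_token"]
    return prefill_ms + decode_ms

def monolithic_latency(req: Request, ms_per_token: float = 2.0) -> float:
    """Same request served end to end on one general-purpose pool."""
    return (req.prompt_tokens + req.output_tokens) * ms_per_token

if __name__ == "__main__":
    req = Request(prompt_tokens=4000, output_tokens=500)  # long prompt, short answer
    print(f"monolithic:    {monolithic_latency(req):8.1f} ms")
    print(f"disaggregated: {disaggregated_latency(req):8.1f} ms")
```

The factory analogy is the design choice: once the stages are separated, each can be scheduled, scaled, and priced on its own hardware, which is the operating discipline the rest of this section describes.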

Others are following suit.

SemiAnalysis published the engineering blueprint. Amazon showed off its own competing factory - Trainium chips, 1.4 million deployed, Anthropic running Claude on over a million of them, claiming 50% lower cost than Nvidia equivalents. Arm shipped its first chip in 35 years. The rollout of AI inference and agentic workflows is in full swing.

This is no longer a software industry. It is heavy industry. It has power grid dependencies, transformer lead times measured in years, physical vulnerability to military strikes (three AWS data centers in the Gulf were hit by Iranian drones on March 1), and capital expenditure cycles that look more like petrochemicals than SaaS.

Again, this is a growing up moment. Arm deciding to manufacture and sell chips, not just designs, typifies it.

If you run a company that depends on AI inference, you are not only choosing a model. You are choosing a supply chain. The product team and the finance team are now in the same meeting, whether they know it or not.

Now look at what the capital markets are saying about all this.

For the first time in history, software trades at a discount to the S&P 500. Not at parity - below it. Albert Wenger gives 75% odds of a major correction. Om asks why a healthy company would offer private equity firms a guaranteed 17.5% return. Thoma Bravo's founder says some valuations face "very warranted" decreases - and when the man who built the largest software buyout firm in history says that publicly, it is not small talk. This is not only about SaaS companies but will impact the next wave of IPOs. Markets price outcomes, and AI has to grow up and deliver them.

The honest version of this week's venture conversation is: the technology is real, the demand is real, and the unit economics are still a question mark for most of the industry. IPOs will likely expose the bleeding.

The market is repricing from high-multiple growth metrics to low-multiple operating reality, and the companies that cannot show margin-accretive, retained, growing paid usage - not just token volume - will discover that patience has a price.

AI policy needs leadership. David Sacks left the building in DC this week - 130 days as AI czar, term expired, reassigned to an advisory board alongside Zuckerberg, Andreessen, and Jensen.

The timing is conspicuous: it came a week after he publicly criticized Trump's Iran war on his podcast. In Trumpworld, that sequence is a demotion, not a rotation. What Sacks accomplished - a federal AI framework, an attempt to preempt 50 state laws, aggressive positioning against safety regulation - now lives or dies in a Congress that has shown no urgency on any of it.

Meanwhile, Sanders and AOC introduced a bill to halt data center construction until Congress passes comprehensive AI regulation. The EU delayed its own AI Act enforcement again. Every branch of government in every major jurisdiction is trying to fill the same governance vacuum, and mostly failing.

This is how platform revolutions always look in the middle. Infrastructure overbuild happens. Financial engineering appears. Abuse patterns emerge. Then competition and regulation catch up, costs fall, and society gets the upside anyway. Telecom, cloud, and mobile all went through this pattern.

But this cycle is different in one way that matters. AI is not just a distribution channel like the web or mobile. When used properly it is a partner that shapes human choices, summarizes reality, and executes actions.

The quality of leadership decisions can degrade AI's usefulness before anyone notices. Proposals to stop the build-out of data centers are a great example of fear driving decision making.

So what do you actually do with this?

Intelligence is getting cheaper. That is good. More people need access to it, and at a price that is inclusive. Fast forward to that point: policy helps determine outcomes, and markets will price them favorably. Anything else is fear wrapped up as principle. Time to grow up?