The Diary

Oct 12, 2025 · 2025 #38

Come to Daddy: OpenAI Wants Your Attention

What happens when one company starts acting like the central bank of compute, the front door to intent, and the rules committee for creative rights - all at once? This week, OpenAI didn't just ask for our attention. It asked for our grids, our balance sheets, and our defaults. And for the most part, developers and users welcomed the ask.

Industrial policy by private contract is here - and it's wearing an OpenAI hoodie

The "AI Inc" web (coined by the FT below) tightened: a 6‑gigawatt AMD pact with a warrant that could hand OpenAI up to 10% of AMD, an Nvidia LOI to deploy 10 GW with "up to $100B" in staged investment, a reported $300B compute purchase from Oracle, and Stargate's U.S. campus roll‑out toward 10 GW, with Abilene, TX as flagship. This is not M&A; it's incentive‑wiring. Capacity becomes destiny because it is all about energy and compute, and financing makes it possible.

"The next leap forward - deploying 10 gigawatts to power the next era of intelligence." - Jensen Huang

Here's the tension: Google's token throughput hit 1.3 quadrillion/month, yet growth decelerated (monthly additions fell from +250T to +107T tokens). Meanwhile Citi pegs AI infrastructure at $2.8T by 2029 and the FT warns the capex endgame is in sight. The Perez‑ian retort - "we never know if we've built enough until we've built too much" - may be right societally, but circular financing across chips, clouds, and labs raises single‑point‑of‑failure risk if any node blinks. OpenAI may lift all boats. Nvidia says it is a demand‑led expansion, and that is right.

The front door is migrating to agents and creation - and rights are becoming rails

OpenAI is turning chat into an action layer with Spotify and Zillow. Sora moved from demo to economics, promising more granular character controls and a new payout scheme:

"We are going to have to somehow make money for video generation." - Sam Altman, Sora update 1

That line matters. If the answer layer keeps users in‑product, then licensing, attribution, and settlement must follow the answer - not the click. Hollywood is already behind on AI video; Apple's Lakers broadcasts in Vision Pro show how quickly formats move when the medium changes. Rights that travel with assets and transparent revenue‑sharing are now product features, not legal footnotes.

Here is my video - accepting the UEFA Champions League as Captain of Manchester United. Yes, that is me.

Trust resets are happening in public - because defaults are policy

CBS installs Bari Weiss and publishes core values; MSNBC releases standards. That's not just media gossip - it's a response to the same power shift: when assistants synthesize the news and platforms mediate action, institutions must re‑articulate what they stand for or lose the edge to the interface.

On the build side, Reflection raised $2B to be an open frontier lab, challenging closed incumbents and DeepSeek alike.

But mostly we have overreactions. Sangeet Paul Choudary's warning hangs over all of it: hype is capital allocation, and agentic systems atop data‑hoarding models can extinguish user agency. Possibly true. But capital allocation can also signal a new technological revolution, propelling humanity to new heights as the agricultural and industrial revolutions did. Indeed, that seems very likely. And the value created by this new human invention will likely return far more capital than it consumes.

Norman Lewis (a good friend), in one of this week's Essays, makes the point that the decline in the authority of human institutions is reinforced by a technocratic faith in AI. It's a good essay, and right about institutions. But I don't think technocrats are the problem; rather, it is the anti-tech lobbies.

I prefer the A16Z essay about the end of postmodernism and its replacement by optimism and big thinking, especially predictive thinking.

The obvious AI overbuild will be productive if we pair it with human gains and measurable outcomes: GDP growth, shorter working hours and weeks, and greater freedom to choose how we spend our time. These are the outcomes it can make realistic.

What to watch next

The narrative about "round trip" cash, implying a suspect use of money between incumbents. It's wrong, but the anti-tech mood of the times is fueling it.

Google's tightrope: How much screen real estate do AI summaries take - and what's the publisher payout model behind them?

Sora's revenue share: Do granular character controls reduce friction or fragment rightsholders into stalemate?

Capacity vs. demand: Does token growth re‑accelerate - or does unit‑economics throttling persist into 2026 orders?

Open vs. closed: Can Reflection's $2B open lab prove safety and governance at frontier scale - or does openness hit a hard wall?

Bottom line: OpenAI wants our attention. The price is gigawatts, defaults, and a settlement layer we can trust. The payoff is human progress. There are many possible futures, and that is one of them. If we end up with one big AI that owns the front door to knowledge and is good enough, then I am OK with it.

What would be left is figuring out how to allocate the newfound wealth across the whole of society, uplifting the species, not only the few. Altman's Worldcoin is one take on how to do that. At least he is thinking about it.

Essay

I've Seen How AI 'Thinks.' I Wish Everyone Could.

WSJ • October 9, 2025

Essay • AI • LLMs • Interpretability • Model Transparency