Human¹⁰⁰
Why This Week Proves AI Is Our Greatest Invention, Not Our Replacement
The Year of Intelligence
If you only read the headlines this week, you'd think Google had already won the AI race. Gemini 3 benchmarks, Nvidia's earnings, and endless talk of "AI bubbles" paint a picture of a single triumphant stack.
But when we zoom out across this week's stories, a different pattern emerges: we're not in the year of Gemini; we're in the year of intelligence itself.
On coding and agents, Anthropic and OpenAI quietly rewrote the script. Claude Opus 4.5 now tops SWE‑bench, agentic coding, and ARC‑AGI‑2, with customers reporting 20% accuracy gains and 15% efficiency gains on real Excel workflows. GPT‑5.1 Codex‑Max pushes automated software engineering further: higher SWE‑bench‑verified scores, hours‑long persistence on METR's "AI 2027" graph, and meaningful jumps on internal AI‑R&D tasks like Paperbench‑10 and MLE‑Bench. Gemini 3 Pro is impressive - but it is one curve on a crowded chart, not the finish line.
When Anthropic can credibly say Opus 4.5 is the "best model in the world for powering agents," and Zvi's analysis of Codex‑Max still points to incremental, not explosive, timelines, the idea that Google has structurally "pulled ahead" looks more like marketing than reality.
Meanwhile, the ecosystem is fragmenting in ways that favor no single winner. China's open‑weight push is leapfrogging US incumbents in downloads and developer mindshare. Open models like Qwen are becoming default infrastructure across the Global South, while Chinese giants move training offshore to access Nvidia chips despite US export controls. If your AI worldview starts and ends in Mountain View, you're missing half the board. HSBC's estimate that OpenAI must raise $207bn by 2030 crystallizes the capital intensity of staying at the frontier.
Nvidia's blowout earnings, Ben Thompson notes, tell us less about bubbles and more about a new industrial stack where power and data centers are the constraint.
"To that end, I think the hand-wringing about OpenAI in particular has gotten a bit out of hand over the last few days. For one, now that people are putting Gemini 3 through its paces, it's clear it's not a perfect model; in particular, it seems to hallucinate much more than GPT 5.1 Thinking, and it's not very good at following directions.
Indeed, per the point above, the biggest improvements do seem directly downstream from the sheer size and associated compute that went into developing it; that not only reiterates the bull case for Nvidia, but also suggests that upcoming models from OpenAI (and Anthropic and xAI, for that matter) should see big leaps as well, especially the ones that are trained on Blackwell (GPT-5 was trained on Hopper)."
This is not a tidy "Google vs OpenAI" duel; it's a trillion‑dollar build‑out where governments (Trump's Genesis Mission), sovereign wealth funds, and retail‑oriented CEFs are all piling in.
Venture plumbing is mutating to match: there are seismic changes in later‑stage venture capital, especially around liquidity.
Closed‑end funds, secondaries as a first‑class tool, and brutal VenCap data (only 6% of funds ever hit 5x) tell us that LPs are trying to surf this wave without drowning in illiquidity or hype.
Bubble talk hasn't slowed term sheets on Sand Hill; it has just forced better structures. At the same time, the "AI vs creators" panic is giving way to negotiated coexistence.
Platformer's "AI is winning the copyright fight" and Warner Music's settlements with Suno and Udio point the same way: labels are folding courtroom maximalism into licensing deals. Training goes on; artists get some knobs and some cash.
As with PopEVE in rare‑disease genomics or Apple‑backed rural classrooms in Alabama, the real story is deployment: AI as infrastructure for diagnosis, education, and culture, not an apocalypse.
The deeper parallel belongs to the Enlightenment thread this week: Britain's rise wasn't just steam and coal; it was a cultural shift toward belief in progress. Our version is playing out in real time.
The Industrial Revolution Forged In Ideas
14.7MB ∙ PDF file
Look at Sutskever's interview on the "age of research", in which he acknowledges that scaling alone isn't enough to reach "reliable generalization", and at Sakana.ai's white paper on "Continuous Thought Machines". Significant new technologies are being developed that are not yet at the product stage. Demis Hassabis at DeepMind concurs on the need for generalization too.
Architecting Reliable Generalization
12.2MB ∙ PDF file
Continuous Thought Machines
8.89MB ∙ PDF file
Add to that Beckert's reminder that capitalism is historical, not natural, and we're being handed the same political choice: do we oppose Sam Altman, Larry Page, Sergey Brin, Elon Musk, and the other cloud tech companies driving the gains because we don't like rich people or big companies? Or do we embrace capitalism as the only way to turn these gains into global wealth? Do we use these productivity gains to shorten the workweek and broaden prosperity? And do we build policy‑based wealth‑distribution models that benefit from the gains AI will deliver?
The models are getting smarter; it is also likely that we will.
Looking ahead, we should stop asking "Will Google win?" and start tracking three harder questions:
Capabilities: Do agentic systems like Opus 4.5 and Codex‑Max start closing real human bottlenecks in R&D, not just coding demos?
Capital: Do structures like CEFs, secondaries, and public co‑ownership tame a $200tn asset world, or simply repackage risk for retail?
Civics: Can projects like Genesis Mission and the Suno/Warner deals become templates - where the state, creators, and capital share upside - rather than one‑off headlines?
This really is the year of intelligence. Not Google's intelligence, or OpenAI's, or China's - but ours, in deciding what we do with it. A belief in 'progress' is foundational to human outcomes.
Essay
Maybe the S&P 500 will triple?
FT • November 24, 2025
Essay•Venture