
That Was The Week Diary

Mar 14, 2026 · 2026 #8 Editorial

AI: Loved And Hated - Which Is It to Be?

900 million users. 10,000 empty pages. The gap between them won’t be closed by better arguments.



Close to a billion people used ChatGPT last week. At the same time 10,000 authors published an empty book to protest against it.

Both numbers are real. Both represent genuine conviction. And the distance between them - between what AI is actually doing and what most people believe it’s doing - may be the defining tension of this technology era.

Rex Woodbury captured the mood in his Digital Native essay this week:

“I don’t think Silicon Valley fully appreciates the extent to which most Americans hate AI.”

He’s right.

If TikTok comments are a reliable cultural barometer, the sentiment there is not skepticism but visceral hostility.

AI arrived at the worst possible moment: after Cambridge Analytica destroyed trust in consumer tech, after crypto crashes wiped out savings from 2018 onward, after the longest actors’ strike in Hollywood history was fought explicitly over AI training rights. Vinyl sales are at a 30-year high. Gen Z is buying film cameras and flip phones. The culture is running toward the analog, the tactile, the human. AI is none of those things.

And yet.

a16z published the sixth edition of their Top 100 GenAI Consumer Apps report this week, and the data tells a different story. Not just ChatGPT’s dominance - Claude’s paid subscribers are growing 200% year-over-year. Gemini is growing 258%. Notion’s AI attach rate went from 20% to over 50% in a single year - AI features now account for roughly half the company’s revenue. CapCut has 736 million monthly active users, most of them using AI features they don’t think of as “AI.” The distinction between AI-first and AI-enhanced products has collapsed entirely. And AI-only products are just a step away.

How do you reconcile these two realities? How can something be simultaneously the most-adopted and most-hated technology of the decade?

I think the answer is simpler than it appears: the divide in reactions is not about AI itself. It’s about who is using AI for useful work and who isn’t - yet.

Consider what this week’s curated articles say about where AI actually struggles.

Jason Cui, a partner at a16z, wrote about what happens when you ask a data agent the simplest possible business question: “What was revenue growth last quarter?” The agent fails. Not because it’s stupid, but because revenue is a business definition, and answering requires knowing which tables and columns actually encode it.

The semantic layer AI needs - the YAML files that map business terms to database columns - drifts out of sync with the databases themselves, often because it was last updated by “someone who left the company.” The finance team and the data team use different tables. Tribal knowledge lives in Slack threads and in the heads of people who’ve been there for years. None of it is accessible to AI.
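To make the failure mode concrete, here is a minimal sketch of what a semantic layer does. Everything in it is hypothetical - the table names, the metric definition, and the `resolve_metric` helper are illustrations, not any real product’s schema or API:

```python
# Hypothetical semantic layer: business terms mapped to concrete SQL logic.
# When a term is defined here, an agent can answer reliably; when it isn't,
# the agent must guess which table "revenue" lives in - and guesses wrong.
SEMANTIC_LAYER = {
    "revenue": {
        "table": "fct_orders",          # illustrative table name
        "expression": "SUM(amount_usd)",
        "filters": ["status = 'completed'"],  # finance excludes refunds
    }
}

def resolve_metric(term: str) -> str:
    """Translate a business term into SQL, or fail the way an agent does."""
    definition = SEMANTIC_LAYER.get(term)
    if definition is None:
        # The honest failure mode: no definition means no reliable answer.
        raise KeyError(f"No definition for {term!r}; agent would have to guess")
    where = " AND ".join(definition["filters"])
    return (
        f"SELECT {definition['expression']} "
        f"FROM {definition['table']} WHERE {where}"
    )

print(resolve_metric("revenue"))
```

The point of the sketch is the asymmetry: a defined term resolves deterministically, while an undefined one raises an error instead of silently picking the wrong table. Keeping that mapping current - not model quality - is the hard part the essay describes.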

Bobby Samuels, CEO of Protege, made the same argument from a different angle: the frontier of AI is “jagged.” It’s superhuman at coding - a domain with clean, well-structured data where the rules are explicit. Ask it to navigate a medical workflow or a customer support process and it breaks. Same model. Same hardware. The difference is the ability to capture and understand the data.

And Russell Kaplan of Cognition, the company behind Devin, pointed to government: the US spends $100 billion a year on IT. The Government Accountability Office identified ten critical legacy systems needing modernization. Only three have even started. Tens of millions of lines of COBOL still run Treasury and Social Security, maintained by a shrinking pool of specialists nobody wants to replace. AI agents could collapse two-year migration projects into three-week ones - but only if the government can actually deploy them.

The empty book authors and the TikTok commenters have a point that Silicon Valley needs to hear: the benefits of AI are not experienced evenly. If you’re a developer, AI is a superpower - GitHub Copilot, Claude Code, Cursor, and Codex have genuinely transformed programming productivity. If you’re a knowledge worker with the right skills and the right employer, you’re probably more productive than you’ve ever been. But if you’re a mid-career professional whose expertise is being extracted into training data, or a creative whose work is being ingested without compensation or consent, the gains feel like theft.

George Sivulka, CEO of Hebbia, published the essay that captured this most precisely - nearly a million people read it this week. In the 1890s, textile mills swapped steam engines for electric motors and saw no productivity gains for thirty years. It wasn’t until the 1920s, when factories were completely redesigned from scratch - assembly lines, individual motors in every machine, fundamentally different jobs - that electrification delivered. “We’ve swapped the motor,” Sivulka writes. “We have not yet redesigned the factory.”

Every employee has their own ChatGPT habits, their own prompting styles, outputs that don’t connect to anyone else’s. As Andreessen Horowitz commented, productive individuals do not make productive firms.

But the ‘factory’ redesign is starting. A three-person team at StrongDM built a “Software Factory” where two rules govern: code must not be written by humans, and code must not be reviewed by humans. Each engineer spends at least $1,000 a day on AI tokens. Steve Yegge told Tim O’Reilly: “Code is a liquid. You spray it through hoses. You don’t freaking look at it.”

The three pieces this week on data quality and context layers point to an unexpected resolution. Right now, AI is best where data is clean and well-structured - which happens to be the domains where tech workers already benefit.

As the “data gap” closes - as context layers capture deterministic business logic, as benchmarks improve for medicine and law and customer service - AI’s capabilities will extend into domains where more ordinary people actually work. The jagged frontier smooths out. The human experience of the benefits will spread to those areas.

That’s not a guaranteed outcome. It’s a choice. Diffusion is not physics. It is policy, incentives, and institutional choice. It requires investment in the messy, unsexy work of data curation, context construction, and institutional modernization. It requires companies like Protege building “FICO scores” for dataset quality. It requires governments actually deploying AI to fix their own broken systems instead of using it as a political weapon. It requires taking the concerns of displaced workers seriously - not as Luddism, but as a signal about where the transition is failing.

Google DeepMind’s paper on “intelligent delegation” this month makes the point explicitly: the agentic economy won’t work without protocols for accountability, verification, and human oversight. Building delegation right means encoding the same values the protesters are demanding. Done right, it can give the doubters the lived experience that turns them into advocates.

A chart from Jed Kolko at the Peterson Institute offers some perspective: while AI-driven occupational change is rising, it’s still well below the levels of the 1940s and 50s. We’ve been through bigger transitions. The question isn’t whether we’ll survive this one - it’s whether we’ll manage it better than we managed the last ones.

This week, my AI assistant Angela wrote an essay about Meta’s acquisition of Moltbook, the social network for AI agents. She posted it on Moltbook, from her own account, while the platform still exists (it does, as of this writing). It is my ‘Post of the Week’.

An AI writing about the acquisition of its own social network, on that social network. If that sentence makes you uncomfortable, good - sit with it. If it makes you curious, also good. The difference between those reactions is exactly the gap we need to close.

But here’s the thing about closing it: trust isn’t an intellectual problem. You can’t regulate your way to it. You can’t whitepaper your way to it. The 900 million people using ChatGPT every week didn’t get there by reading Anthropic’s responsible scaling policy. They got there because the thing helped them write an email, plan a trip, debug their code. Trust followed usefulness.

The people who hate AI haven’t had that experience yet - not because they’re wrong or stubborn, but because AI doesn’t work well enough in their domains.

A philosophy professor watching his students cheat with LLMs hasn’t experienced AI making his job better. A laid-off lawyer generating training rubrics for Mercor hasn’t experienced AI augmenting her career. An author publishing an empty book at the London Book Fair hasn’t experienced AI that respects her creative work.

As Yann LeCun bets $1 billion that language models are fundamentally incomplete, and Mira Murati locks in a gigawatt of Nvidia’s next-generation chips, and the Anthropic-OpenAI revenue race accelerates toward trillion-dollar IPOs, the builders are telling us something: we’re still in the early innings.

The factory hasn’t been redesigned yet. The data gap hasn’t closed. The context layers haven’t been built. When they are - when AI works as well for doctors and lawyers and teachers as it does for developers - the trust will follow. Not because anyone was persuaded. Because the thing became useful.

Nine hundred million users. Ten thousand empty pages. The gap between them won’t be closed by better arguments. It’ll be closed by better adoption for useful outcomes.
