### Who Gets to Tell the AI Story?

The Answer Matters
Three narrative machines are running full tilt this week. Each wants to define what AI means before we can figure it out for ourselves.
The first is the doom machine. A new documentary, "The AI Doc: Or How I Became an Apocaloptimist," opened in theaters on March 27. It features a parade of interviewees who, as David William Silva observed,
> "describe AI-driven extinction with the calm confidence of people who have said these things so many times they have stopped noticing they have no evidence for them."
The film positions Tristan Harris — whose Center for Humane Technology received $500,000 from the Future of Life Institute for "messaging cohesion within the AI X-risk community" — as a neutral voice in the middle. Harris told the AP his hope is that the film becomes "An Inconvenient Truth" for AI. That comparison should alarm you. His previous efforts — "The Social Dilemma" and "The AI Dilemma" — were exercises in manipulative hyperbole dressed as public education.
The documentary lets three factual falsehoods pass unchallenged. That Anthropic's Claude decided, unprompted, to blackmail someone — when in fact researchers iterated through hundreds of engineered prompts to produce that outcome. That AI is "less regulated than sandwich shops" — when state attorneys general from both parties and FTC Chair Lina Khan have explicitly said existing laws already cover AI. That data centers threaten drinking water — based on a book that had to issue corrections after a key figure was off by a factor of 4,500.
Silva names the incentive: "The believers are a market. As long as the ratio stays favorable, the machine is profitable." The doom industry isn't confused; it's commercial. The more people believe AI will destroy the world, the more money flows to those selling that fear.
The second narrative machine is corporate. This week OpenAI acquired TBPN, a daily tech talk show with 70,000 viewers per episode. The show will report to Chris Lehane — OpenAI's chief political operative, the man who built the crypto super PAC Fairshake — under "strategy," not communications.
As Om Malik writes:
> "You don't put an editorially independent media property under your political operative."
The press release mentions editorial independence four times and coins a new term, "Editorial Independence Covenant." Fidji Simo's justification:
> "The standard communications playbook just doesn't apply to us."
Om's historical parallel is blunt: Lenin argued in 1902 that his revolution needed its own newspaper. He named it Pravda — truth. As it happens, Lenin was right. OpenAI may prove less so.
This sits alongside the broader OpenAI IPO construction. A $122 billion round at an $852 billion valuation. Amazon's $50 billion anchored to an AWS contract. ARK ETFs distributing OpenAI shares to retail investors before a filing exists. Banks extending a $4.7 billion credit line that doubles as an underwriter audition.
As Packy McCormick argued this week, the laziest move in all of this is the analogy:
> "you could have said the same thing about Amazon."
The correct response is: show me the negative working capital engine. Show me the unit economics that improve with scale. Show me the 1997 letter. Most likely, OpenAI *can* show those things: it reports $2 billion a month in revenue, and growing.
The third narrative machine is financial. Fundrise's VCX fund debuted at ~$700 million and surged to $6.5 billion within three days — trading at 30 times its net asset value. That's not price discovery. That's retail investors paying any price for a story about access to private markets.
Stanford's endowment CIO Rob Wallace says there are "likely only 10–12 early-stage VCs in the US who generate the majority of profits." PitchBook's latest report documents the fraud spreading through venture secondaries — SPVs investing in SPVs, layered fees, no standardized way to verify that the person selling you "OpenAI shares" actually holds them. The story of access is being sold harder than the access itself. VCX has since pulled back from a peak of $575 a share to around $130, against an underlying asset value of $18–19 a share. Media does drive irrational behavior.
So: doomers selling fear, corporations selling narrative, financiers selling access. Three machines, all running hard. What do they have in common? They all assume you need someone to interpret AI for you — to tell you whether to be afraid, excited, or invested.
Now look at what the people actually studying and building the technology found this week.
John Burn-Murdoch at the Financial Times tested four major LLMs against 61 policy questions using simulated users across the ideological spectrum. Every model nudged responses toward the center. Conspiratorial beliefs overrepresented on social media were nearly absent from AI outputs. Paul Kedrosky flagged the structural explanation:
> "social media profits from engagement and clicks, so inflammatory content gets amplified; AI models are trained on vast corpora skewing toward published, edited, expert-legible text."
Derek Thompson reviewed randomized trials and meta-analyses on the Smartphone Theory of Everything — the idea that phones explain rising anxiety, polarization, and declining youth mental health.
His finding: phones are global, but their worst effects are concentrated in a handful of rich, English-speaking countries. Youth happiness plummeted in the US and UK while rising in Eastern Europe and East Asia. Phones aren't the cause; they're an accelerant interacting with distinctly American conditions. The technology doesn't override culture. It amplifies whatever's already there.
Thomas Ptacek — one of the most respected voices in security research — wrote that within months, finding zero-day vulnerabilities will be as simple as pointing a coding agent at a codebase.
This is a real, consequential development. Not speculative. Not theatrical. Anthropic's own red team demonstrated it: a bash script looping Claude Code across a repository produced 500 validated high-severity vulnerabilities. The targets that won't cope aren't the AI companies. They're routers, printers, hospital systems, regional banks — anything that requires someone to physically push a button to patch.
Richard Schenk, writing about Europe's digital regulation, identified the philosophical assumption underneath the narrative machines:
> "The more individuals are perceived as determined by forces beyond their control, the stronger the paternalistic temptation to 'correct' outcomes through centralised intervention."
Post-war liberal democracy assumed citizens were rational agents capable of judgment. The new regulatory impulse assumes they are vulnerable, manipulable, and in need of protection from their own choices. Interestingly, these paternalistic, and ultimately anti-democratic, impulses come mainly from the "left."
That assumption — that humans can't handle what AI puts in front of them — is shared by the doomers, the regulators, and the corporate narrative-builders alike. It's the one thing Harris, the EU AI Office, and OpenAI's strategy division agree on: ordinary people need mediators.
The evidence from this week says otherwise. Chatbots moderate rather than radicalize. The Smartphone Theory collapses on contact with cross-cultural data.
Actual AI risks — vulnerability research, supply chain attacks, scaling limits — are concrete, measurable, and addressable by competent engineers and policymakers, not by documentary filmmakers or political operatives.
The biggest risk this week isn't AI. It's letting the people with the loudest megaphones — and the clearest financial or political incentives — define what AI means before the rest of us figure it out for ourselves.
Humans are clever. They always have been. The question isn't whether AI needs to be explained to them. It's whether anyone will let them think and act for themselves.
---
### Contents
- [Editorial](#editorial)
- [Essays](#essays)
  - [I Saw Something New in San Francisco](https://www.nytimes.com/2026/03/29/opinion/ai-claude-chatgpt-gemini-mcluhan.html)
  - [OpenAI Investor Says AI Requires an Income Tax Overhaul](https://www.ft.com/content/7de1d3c5-0d0c-46b1-b2b7-dbf6f5226069)
  - [Chatbots as Anti-Social Media](https://paulkedrosky.com/chart-of-the-weekend-chatbots-as-anti-social-media/)
  - [America's Civil Service: A History](https://www.chinatalk.media/p/americas-civil-service-a-history)
  - [The Hidden Anthropology Behind Europe's Digital Regulation](https://x.com/DIObservatory/article/2037920331965378954)
  - [National Capitalism](https://blog.joinodin.com/p/national-capitalism)
  - [Is the Smartphone Theory of Everything Wrong?](https://www.derekthompson.org/p/is-the-smartphone-theory-of-everything)
  - [The Fix Is In](https://om.co/2026/03/31/openai-the-fix-is-in/)
  - [OpenAI Acquires TBPN, a Daily Tech Talk Show](https://www.theverge.com/ai-artificial-intelligence/906022/openai-buys-tbpn)
  - [Masters of Agitprop 2.0](https://om.co/2026/04/02/openai-masters-of-agitprop-2-0/)
  - [Bad Analogies](https://www.notboring.co/p/bad-analogies)
- [Venture](#venture)
  - [Stanford's Endowment CIO on Venture: Only 10-12 VCs Generate the Majority of Profits](https://www.linkedin.com/posts/marcelinopantoja_stanfords-endowment-cio-rob-wallace-was-share-7444526776665444352-kHMX)
  - [When Access to VC Becomes a Liability](https://pitchbook.com/news/reports/q1-2026-pitchbook-analyst-note-when-access-to-vc-becomes-a-liability)
  - [AI Seed Startups Are Commanding Higher Valuations Than Ever](https://techcrunch.com/2026/03/31/its-not-your-imagination-ai-seed-startups-are-commanding-higher-valuations/)
  - [Pareto to Creato](https://dougshapiro.substack.com/p/pareto-to-creato)
- [AI](#ai)
  - [Why OpenAI Killed Sora](https://www.theverge.com/ai-artificial-intelligence/902368/openai-sora-dead-ai-video-generation-competition)
  - [Silicon Valley Learns to Love the Government — at Least When It's Friendly](https://www.newcomer.co/p/silicon-valley-learns-to-love-the)
  - [Anthropic Knew the Math. It Sold the Tickets Anyway](https://www.implicator.ai/opinion-anthropic-knew-the-math-it-sold-the-tickets-anyway/)
  - [Veblen & Jevon Walk Into a Data Center](https://www.tomtunguz.com/jevons-to-veblen/)
  - [AI Applications and Vertical Integration](https://www.tanayj.com/p/ai-applications-and-vertical-integration)
  - [Vulnerability Research Is Cooked](https://sockpuppet.org/blog/2026/03/30/vulnerability-research-is-cooked/)
  - [Claude Code Leak Exposes Source, Upcoming Features, and an Always-On Agent](https://www.theverge.com/ai-artificial-intelligence/904776/anthropic-claude-source-code-leak)
  - [Baidu's Robotaxis Freeze in Wuhan, Trapping Passengers and Snarling Traffic](https://www.theverge.com/ai-artificial-intelligence/905012/baidu-apollo-robotaxi-freeze-china)
  - [Mercor Hacked via LiteLLM Supply Chain Attack](https://techcrunch.com/2026/03/31/mercor-says-it-was-hit-by-cyberattack-tied-to-compromise-of-open-source-litellm-project/)
  - [Anthropic Responsible Scaling Policy v3: Dive Into the Details](https://thezvi.substack.com/p/anthropic-responsible-scaling-policy-46a)
  - [Chatbots Are Now Prescribing Psychiatric Drugs](https://www.theverge.com/ai-artificial-intelligence/906525/ai-chatbot-prescribe-refill-psychiatric-drugs)
  - [Sycophantic AI Decreases Prosocial Intentions, Stanford Study Finds](https://techcrunch.com/2026/03/28/stanford-study-outlines-dangers-of-asking-ai-chatbots-for-personal-advice/)
- [AI Doc Review](#ai-doc-review)
  - [The AI Doc's Falsehoods and False Balance](https://www.techdirt.com/2026/04/02/the-ai-docs-falsehoods-and-false-balance/)
  - [Are You an AI Apocaloptimist?](https://www.futureofbeinghuman.com/p/are-you-an-ai-apocaloptimist)
  - [Hollywood Just Packaged AI Anxiety and Is Bringing It to Theaters](https://davidwsilva.substack.com/p/hollywood-just-packaged-ai-anxiety)
- [Regulation](#regulation)
  - [Meta Has Discussed Ending Funding to the Oversight Board](https://www.platformer.news/meta-oversight-board-funding-cancel/)
- [Infrastructure](#infrastructure)
  - [Mistral AI Raises $830M in Debt to Build a Data Center Near Paris](https://techcrunch.com/2026/03/30/mistral-ai-raises-830m-in-debt-to-set-up-a-data-center-near-paris/)
- [Interview of the Week](#interview-of-the-week)
  - [What If It's a Bunch of Shit? — Dr. Margaret Rutherford on Keen On](https://keenon.substack.com/p/what-if-its-a-bunch-of-shit)
- [Startup of the Week](#startup-of-the-week)
- [Post of the Week](#post-of-the-week)