Apr 4, 2026 · 2026 #11
Who Gets to Tell the AI Story?
Speaker 1
It started with a simple game, but the AI grew smarter, faster, until it didn't need us anymore. That's when the danger truly began. And that, my friends, is where our tale ends for tonight. Thank you, John. Excellent, as always. My pleasure.
Hello, everybody. It's Saturday, the 4th of April. If it's a Saturday, it must be That Was The Week, our summary with Keith Teare of everything interesting that's happened in technology this week. We're both out on the West Coast, so we have a front-row seat in this new movie. And speaking of movies, last week I told Keith that he needed to go and see the AI doc. I'd seen it the previous week. This week, we're going to talk about it a little bit more. So Keith, who's a very obedient fellow, went off with his wife, Jeanne, to a movie house in Palo Alto, it's good they still have them there, Keith, to see the AI doc, or How I Became an Apocaloptimist, meaning, in other words, how I got confused about AI. Lots of reaction online. We'll get to that, Keith. But before we get to that, on the movie, what did you think of it?
But... Especially for you, you multitasker. I hope you didn't whip out your smartphone in the film, did you? I did not. And in my local cinema, which I go to about three or four times a week, the Alamo, you're not allowed to take smartphones in. So if you're staring at your screen in the movie, you get thrown out, which is a good thing.
Yeah. Well, actually, I went to the Alamo in Mountain View, so... Very similar. They must have got a distribution deal with the Alamo. But back to the movie. Look, I think it's a massive failure of a movie because it pretends to be talking about AI either as a problem or as a solution and ultimately does neither. It doesn't establish a problem and it definitely doesn't establish what kind of a solution it is for what problem. And it brings both optimist and pessimist to the table. In that sense, you know...
Yeah, exactly. And in both cases, the optimist and the pessimist, it limited itself to declarations without substance. There was never an explanation of what the negative case is or what the positive case is. So it ended up being a voyeuristic kind of exercise, you know, peeking behind the curtains of opinion, which really didn't lead anywhere.
Well, but, I mean, I don't necessarily disagree, but I think the interesting thing about the film is this director, Daniel Roher, is a pretty successful young director, obviously very talented. The central narrative in the film is that Daniel, the co-director, goes out to figure out what this AI thing is, particularly in the context of being a new parent. The narrative is that his wife gets pregnant during the movie, and so he wants to know, firstly, whether or not he should have a kid, and secondly, what kind of world this child will inherit, an AI world. So in a sense that's a very pertinent narrative. It's very typical in the sense that everyone's thinking the next chapter in our narrative, our collective narrative, will be AI. So that's a fair beginning to the story, isn't it, Keith?
Yeah, no, I think his initial opening statement of motivation is fine. It's totally fine. And he holds that thread through the movie. His wife is pregnant and gives birth. And by the end of the movie, the child is visibly about one year old, I think. So that probably does reflect the thoughts of a lot of people. Because it's pretty hard, unless you're an expert, to get into the mechanics of either side. So you do end up being an observer of a debate without really having the tools to come down on one side or the other, except for your natural instinct. So I think that he captures that angst on the doomster side, and he captures the over-optimistic zeal on the other side. But in both cases, given that I do actually know how it works and what is really happening, it leaves you dissatisfied.
Well, maybe they should have interviewed you, Keith. That was the mistake. In your editorial, you suggest that the film positions Tristan Harris, who's become perhaps the most articulate and successful critic of technology, not just of AI but of social media as well, as a moderate or somewhere in between. I'm not sure that was the case, though. I mean, was Harris presented as the voice of reason in the film?
No, I think he was definitely on the, you know, let's just label it doomster, but that might be an unfair word for him. But he was definitely on the doomster side. And, you know, when you leave it, I said to my wife, who is naturally inclined to be a skeptic about AI, you know, what do you think? In what sense? You know, she... fears the replacement of human agency by tech, and that leading to worse outcomes, worse outputs. She's not in the it's going to kill us all world, it's more that it's going to make our lives less interesting kind of world. And I asked her about the movie at the end, and the point we both agreed on is The only opinion you could come out of the movie with is a negative one of AI. There's no narrative in the movie that would allow you... I would disagree. I mean, one of the other... Finish the point. There's no narrative that would allow you to go in as negative and come out as positive. I'm not sure.
I mean, the... The other piece of the Roher narrative, the co-director and the main character in the film trying to figure out the meaning of AI, was that his father had a rare form of cancer, and it was acknowledged, particularly in conversations with some of the more senior AI people, that this might help him. So, I mean, you broadly talk in your editorial, and it's not just about the AI doc, about who gets to tell the AI story. I mean, don't we all? I mean, what does this even mean, who gets to tell the AI story? You're sounding like a wokey type, Keith.
Well, look, who gets to tell the story of brain surgery? Hopefully it isn't Mormons who are against surgery. Hopefully it's brain surgeons, because they understand brain surgery, and they take all of your natural fears about somebody going inside your skull and give them context, and allow you to feel that there will be a good outcome, because they understand it. Well, you trust them. And you trust them.
Well, some of us trust them. I mean, as we know from the COVID pandemic, the politics of COVID and even the RFK Jr. stuff these days. Not everyone trusts. I mean, they certainly don't trust journalists and they don't even trust doctors. Anthony Fauci has become the antichrist for many people. So some brain scientists are considered ideologues of one kind or another.
Well, I think it changes when the discussion is social policy versus cure. You know, a brain surgeon is about cure. Social policy is a different sphere. And Fauci was very much in the world of social policy around COVID, not diagnosis or science. It was the science of social policy that he was focused on. And their opinion flourishes because in civil society you're allowed to, and in fact we should encourage, many, many different opinions. But that doesn't mean any of them are right. They're just opinions. And I think the headline this week that was the catalyst for it was OpenAI acquiring TBPN.
Yeah, this was another piece of surprising news, to put it mildly: OpenAI buys streaming show TBPN, aiming to change the narrative on AI. And this comes back to your theme of who gets to tell the AI story. Do you think the story behind this story is that the people at OpenAI feel the AI story isn't being told fairly, so they bought a company that has a... It's like buying TechCrunch in the Web 2.0 age. It's like as if Google or Facebook bought TechCrunch, and you were on the front lines of that, too. I mean, it would have been slightly absurd, wouldn't it?
Yeah. Well, eventually AOL bought TechCrunch, and it bought it as a media business. OpenAI, as Om Malik says in his piece about this this week, is buying what Om calls a propagandist and an agitator. He quotes Lenin: when Lenin started Pravda, he did it because he wanted to have a media outlet that he could use to educate the masses, as it were. Well, OpenAI is buying something. Even though it claims editorial independence, you don't really buy a media outlet unless you want to influence the message.
Right. And it's a very odd decision by OpenAI, especially given that in the last few weeks there's been all these stories about them. Focus, focus, focus: the emergency alert, get rid of their video algorithm and all the rest of it. And now they've distracted themselves by buying a media company. The Times says this was driven by Fiji Simo, a top OpenAI executive who had been impressed with the show's marketing instincts. Is that your reading? And apparently they're all going to report to Chris Lehane, perhaps the most invisible power broker in Silicon Valley, who lives just up the road from here. My wife and his wife are very close friends. It's all a bit weird, isn't it?
Well, weird... Weird, yes, I think I will give a tick to that word. But it also probably denotes a moment when OpenAI feels it isn't winning the messaging war, probably vis-à-vis Anthropic, actually. I don't think it's the messaging war against doomsters; I think it's the messaging war against Anthropic. Yeah, that's a good point. And that
was another, I think we covered this last week, piece in the Wall Street Journal by Keach Hagey, a very good writer who just wrote a book on Google and OpenAI, who speaks of the increasingly personalized nature of the competition between OpenAI and Anthropic. It reflects, I'm getting the sense, Keith, and you'll probably deny this, that you're beginning to slightly doubt OpenAI. You've always been the ultimate OpenAI guy, but now you're beginning to think, hmm, maybe they're not quite as inevitable as I used to think. Is that fair?
No, look, I think if you look at this from the TBPN point of view, You've got to say, why did they sell? They obviously don't really care about journalism.
Yeah, I don't care about TV. I mean, that's the other part of the deal. I read a rumor that it was in the hundreds of millions. I mean, how much did TechCrunch sell to AOL for? Was it 20, 30 million?
Right, that certainly wasn't in the hundreds. So why would they turn down a hundred plus, maybe 200 plus million deal to be acquired by OpenAI? They're set for life, these guys. Go and do another one.
Well, then if you look at it from the OpenAI side, I think you have to say it's a smart move. It kind of reflects last week's message when we said OpenAI is growing up. They're abandoning peripheral things.
Is vulnerable? She's, well, she's more than vulnerable, because this is the other piece of news that you didn't include: all these executive changes at OpenAI. Brad Lightcap, the longtime COO, which is the number-two man in the company, is now going to lead special projects. The chief marketing officer is stepping down. And Fiji Simo is taking medical leave for several weeks to seek a new treatment for a rare disease she has. So there's a lot of executive churn. Again, it doesn't necessarily suggest stability on Sam's ship, does it?
You're a startup guy, Keith. You've run these things. You know how it works. I mean, all these senior people, the COO shifting to special projects doesn't sound very reassuring, does it?
OpenAI is the biggest and fastest growing startup the world has ever seen, ever, compared to nobody, compared to Tesla, compared to SpaceX. There's nothing like it. It's many times bigger than Anthropic.
I don't think so. I think the market's growing. OpenAI still owns the bulk of the market, and Anthropic's growing as well. I think catching up is a difficult thing to prove. I'm not convinced I would go with that. If it was true, I wouldn't mind admitting it. But I'm not convinced it's true. But they're both great companies. I mean, they're number one and number two in the biggest, fastest-growing startups ever.
But we always have that in tech. Every generation, whether it's Web 1 or Web 2 or Web 3 or AI, they're all the biggest because that's the nature of the economy.
Yeah, but I'm trying to address the quote you read out about the people. When you're in the middle of that kind of a scenario, stability is your enemy. I mean, you really need to be discussing your strategy probably weekly, re-addressing your priorities probably monthly, and changing the deck chairs on the ship appropriate to your conversations. So I don't think this instability is a bad thing. I think it's a sign of life, not a sign of death.
Well, you gave away the deck chairs, that's the Titanic quote, about shifting the deck chairs on the Titanic when you're about to hit the iceberg. We will see whether OpenAI hits the iceberg or whether Fiji Simo will save the company by acquiring TBPN. There's a third strand of who gets to tell the AI story in your editorial, which in a way is probably more interesting than even OpenAI buying a media company or this AI doc. It's the markets, Keith. What are the markets telling us? And do the markets tell the story accurately? Is this the best way of actually gauging what's happening?
Well, markets are a combination of a rear view mirror and speculation about the future. So they never actually tell the story. They're a pulse on the present, is the truth. It's interesting, one of our viewers on Facebook, Courtney Hamilton, has left a couple of chats saying that
And she says she can sense... Yeah, she's probably you. She's probably your AI. Who is Courtney? What's her name? Courtney Hamilton. Yeah, it's one of your girlfriends, Keith. You played her.
I don't trust Courtney Hamilton. And then she says she can sense the moral panic in the voice of the interviewer, which is you. She doesn't even know my name. Poor old Courtney. Anyway, back to the question. I think the markets are pricing these companies
you know, very aggressively. And they're probably both going to IPO this year. Polymarket says that OpenAI might be more likely to IPO next year. So who knows? But you know what's interesting is Elon is going to beat both of them.
Oh, a boyfriend, no. I don't mind. I'm open-minded.
Speaker 3
Hey, Courtney. You must admit, Courtney, it's a reasonable mistake. My name is Keith Teare. In school, I was known as KT, and that got shortened to Kate. So I spent my whole teenagehood being Kate.
Definitely created a few tense moments after school, let's say that. But, yeah, so I think... We've said that OpenAI, or I've said, is probably going to end up being worth 10 trillion. I still think that. And by the way, I think Anthropic might be worth three to five.
We'll see. That, you know, again, is very long term. Let's focus a little bit more on the concrete. One of your critiques, which I think is a good one, and I think you and I probably agree. I mean, the New York Times review, which I think was a good one, said that the movie tried to cover so much, it ended up being more confusing than clarifying, but the parts were fascinating. I think that's a fair reflection on the film. I think what you said earlier is that no one really was defining what AI was, which is the problem with the film, because they weren't really using it. Maybe Daniel Roher, whether or not he was quite as...
Inexperienced, innocent as he claims, but he was presented as the guy who knew nothing about AI and needed to be educated by all these people. But as you say, you need to know something about it. And there was a good op-ed in the New York Times, which you list in this week's newsletter, by Ezra Klein, who is a very popular podcaster and writer, the co-author, of course, of Abundance. And he writes, I Saw Something New in San Francisco, and he writes about AI as someone who uses it. He brings up the famous McLuhan quote about whether we use AI or AI uses us, which I think is particularly relevant. So what does Klein say about AI, and why is it perhaps more useful to read that than to watch the AI doc?
So Ezra Klein's piece really is about the triumph of agency. His visit to San Francisco resulted in him noticing how many people were using OpenClaw, which is this interactive personal assistant style agent that was released to open source and then acquired by OpenAI.
very similar to the TBPN acquisition. It retains its separateness, and it's now being run by Dave Morin, who's a well-known Valley venture capitalist and entrepreneur. And OpenClaw basically is an empowering tool for a human that gives the human a massive boost in productivity and control, actually. But it comes with some risks, because you have to give it access to your computer, which I do. And Klein, who's on the East Coast, of course, was an observer of this, in the same way that the movie was made by an observer, and came away thinking that this is a kind of change in the whole way things are used. Now, since then, Anthropic's Claude has morphed to move in the direction of OpenClaw. It hasn't quite gotten there yet. It's way too hard to turn it into a... Yeah, but it will.
I think so. I've played with the efforts they've made so far, and they're not as good as OpenClaw, but I want them to succeed because I kind of like the...
Right, but Klein's point is, he says, why we need a good dose of McLuhan, his famous line, which he may not have actually said, the best lines are never actually said by the people we believe said them: we shape our tools and thereafter they shape us. And I think Klein's point is that this AI is shaping us as individuals. Yeah. And I think he's right that...
And I'm quoting him, he said, the effect is to constantly reinforce a certain version of myself. And that's what these AIs do. They pick up on aspects of ourselves and they push them. I mean, maybe they're complimenting us. Maybe they're trying to get us to self-improve. But I think it's an interesting observation.
Yeah, self-improvement is, of course, I think the motivation of all technology ever. I don't know why you would get interested in technology if it wasn't for self-improvement. And self can be collective. It can be individual and collective. And so you have to hand it to Ezra Klein. I think his abundance insight about a year ago after the election loss for the Democrats and his recognition of AI as self-improvement are both very humanistic in the humanist tradition of thinking and the enlightenment tradition of thinking. And not to see it that way would be bizarre. I don't know what other view you could have because then you'd have to endow AI with some kind of consciousness as a thing in itself.
Right, and I think that maybe coming back to the movie, one of the things that the movie should have done or could have done to make it a little bit more interesting is use a little bit of AI in it to show us, the viewer, because this is made, filmed for the viewer, to show us how AI could actually change a movie. It didn't do any of that, and it was very much a traditional top-down film Lots of graphics. In fact, the more graphics there were, the more confusing it became because all you were watching were all these images on the screen, but you weren't actually quite sure what they meant.
Well, even worse, Andrew, and we haven't made this point, so let me make it. I think all three leaders that showed up to be interviewed, Sam Altman, Dario Amodei, and Demis Hassabis, none of them actually showed any leadership in addressing those questions. In fact, they're so paranoid about the moral panic that they come across as unconvinced that there will be a good future. All three of them.
missing people who actually use AI. Do you find that your interactions with whichever AI you're using, is it pushing certain versions of yourself? Is it creating more or less Keith Teares?
Well, my use of AI exposes its weaknesses, mostly.
Speaker 3
Of course, it has massive strengths. But the thing you notice as a user is its weaknesses. So this week, I did a few things. I've got a board meeting next week. And I have an agent that produces my board report based on some database queries.
I published the State of Venture at thestateofventure.com. And I did the monthly venture capital report, which is another agent. And for That Was The Week, the whole workflow involves agents all the way through, from headline writing, editorial, gathering the pieces, to organizing the newsletter into something publishable. And
No, but I don't publish its editorial. I publish mine. If you'd see the one I don't publish, you would understand what I mean by its weakness. So what does that mean? Well, for me, it means I have the constant experience of using it a lot but being in control. I'm overriding it. more than I'm just accepting it.
It sounds like your wife, Keith. You're a traditional male in this marriage. Is that right? You're overriding it. And you're a one-man publishing business. And your post of the week is also about another one-man business, but this is a $1.8 billion company, Medvi. You have a post about how it's a $1.8 billion revenue company with two employees. It sounds like you and I. When are you going to sell us to Anthropic for $1.8 billion, Keith?
Well, because they're all put off by my obnoxiousness, or by you, when you misunderstood Courtney and thought it was a woman rather than a man. You need to be more careful, Keith, these days. Sometimes some of our viewers might be both simultaneously on different days, so you shouldn't jump to gendered assumptions. Absolutely right. Sorry, Courtney. I've now forgotten the question. The question is, this Medvi company, is it a two-person, $1.8 billion company? It's not really $1.8 billion. I mean, it's not worth $1.8 billion, is it, Medvi?
It's probably worth more than that. What they do is they market GLP-1 drugs, and they're the most successful at working with compounding pharmaceutical companies to get GLP-1s to people who don't qualify for a prescription. And it's become a huge business. It's mainly a marketing business doing ads, and they have a very specific way they do it, on Facebook mainly. And the 1.8 billion is the revenue number, but the profit is about 30
Courtney? Yeah, but it is a two-man business. Who are they? They're obviously very good at using the tools, and they've cracked the code on marketing GLP-1s. Well,
the future is, I mean, of course, the biggest headline will be when we have a $1.8 billion company not run by anyone. But all this comes back to the same theme, one that comes up every week: the issue of agency. And there was an interesting op-ed in the New York Times this week, which I saw and encouraged Keith to put in the newsletter, by Sophie Hingey, I assume it's a young woman or certainly a woman: All the Worst People Seem to Want to Be High Agency. And it's a polemic against the idea of agency. And I wonder, Keith, the more I think about it, whether agency, I mean, everyone talks about the end of the left-right, conservative-socialist distinction, but whether agency is the defining political quality, whether it could even be seen as an ideology in our age of AI.
Well, interesting, agency is most assumed to be a right-wing idea these days. I think it started with the backlash against modernism, Stalin and Hitler, and this narrative that crept into sociology.
They depicted modernism, which is the idea that you can consider the whole and try to change it. And that led to... I'm not sure everyone would agree with that, but anyway, go on. So anyway, my point is that the left had an intellectual reaction against what was called dictatorial thinking, oligarchs and the like, which led to a kind of identity politics, which is kumbaya land, let everyone be who they want to be. And in that context, agency is arrogant. It's somebody who believes that they have the right to influence outcomes, and that's considered arrogant. And so this article against agency is what I consider the left's abandoning of any historical agenda. If you have a historical agenda, you need to have agency. Agency is the precondition for making history. And so for me, agency is crucial. It's a good thing. And the idea that someone is arrogant because they have an opinion and want to persuade you to agree with it, well, what you're really saying is: don't talk to me, let me just get on with my life, I'm not going to be responsible for the future, I'm just going to live my life passively in the world I was born into. So I do think it's a super important conversation.
Yeah, and there was an interesting, and this comes back to the, not the OpenAI, the Meta-YouTube case last week, where they were found guilty. I think agency is really important. George Will had a, you didn't put it in the newsletter, but I think I said... Yeah, I did in the end. It's in. Okay, well, I'm just putting it up on the screen. The question of agency is becoming increasingly important. Will had an op-ed in the Washington Post saying the verdict against Meta and Google carries sinister implications, because it suggests that when we use these technologies, like Instagram or YouTube, we're not really in control of ourselves. So I'm not sure whether he would agree with the New York Times' Sophie Hingey, but the issue of agency seems to be covering everything. And, of course, the way she sets it up is it's Silicon Valley versus the rest of the world, because you're actually in Silicon Valley, I'm on the edge. But apparently high agency is what everyone likes in Silicon Valley. We all want to be high-agency people. Is that right, Keith? When you go to cocktail parties in Palo Alto, do people introduce themselves as high agency?
You know, we used to call it A-type personalities, if you remember that, Andrew. And Silicon Valley is full of A-type personalities who believe that the things that they think about are important for everyone.
as should every human being, otherwise we become passive acceptors of the life that we're given. So I do think the Meta case is kind of interesting in that regard, because what that case said is that the girl who was depressed was a victim of social media, and it didn't consider her decision-making to have anything to do with that. She was a solely passive victim of media. And that depicts humans as infants who have no decision-making role whatsoever. And that just isn't true. So where does that come from? It really is a kind of left, anti-big-tech victim culture that is just wrong.
Well, it's not just the left. I mean, I agree in part, but it's the right as well. I mean, Steve Bannon has just as much given up on agency as Bernie Sanders. And Hingey, in her New York Times piece, suggests that this will come to the boil in the age of AI. And this really is the theme of our show, and so many shows. And maybe that was the point of the AI doc, even if it didn't bring it out. On the one hand, the promise of AI is it's supposed to empower us, to turn us all into high-agency individuals, and you always talk about that, Keith. But on the other hand, the big fear is it will take away agency. So I don't know whether the AI revolution is a cause or a consequence, it's a bit of both, but it's brought the issue of agency to the fore. It's really becoming, increasingly, the political conversation.
Yeah, but that's the misnomer, and it's a shame the movie didn't go into this, but AI, they did acknowledge that it's a big calculating machine. That's what AI is. It's a calculating machine.
Yeah, and in order to trigger a calculation, you have to ask it something. And when you ask it something, you can provide context in the form of files known as markdown files, For example, OpenClaw has a file called soul, S-O-U-L dot markdown, and that gives the AI its personality. You describe it. So agency is not given up. Agency is in the hands of the human, and it shapes the AI completely, 100%. The AI can't do anything outside of the context you give it. And by the way, each session is unique. And so you're starting from scratch every time, a little bit like Groundhog Day. And so the idea that AI is this thing that lives outside of you is just wrong.
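[Editor's note: the mechanism Keith describes, a human-authored personality file that seeds every fresh session, can be sketched in a few lines of Python. This is an illustrative assumption about how such an agent might assemble its context, not OpenClaw's actual implementation; the file name and function here are hypothetical.]

```python
# Sketch of the pattern described above: an agent reads a personality
# file (e.g. SOUL.markdown) and prepends it to every brand-new session,
# so human-authored context shapes each run. Hypothetical, for
# illustration only.
from pathlib import Path

def build_session_prompt(soul_path: str, user_message: str) -> str:
    """Assemble a fresh session: personality context, then the user's ask.

    Nothing carries over between calls, which is the "Groundhog Day"
    property mentioned in the conversation: every session starts from
    scratch with only the context the human provides.
    """
    p = Path(soul_path)
    soul = p.read_text() if p.exists() else ""
    return f"{soul}\n\n---\n\nUser: {user_message}"

# Demo with a throwaway soul file.
Path("SOUL.markdown").write_text("# Soul\nBe terse. Cite sources.")
prompt = build_session_prompt("SOUL.markdown", "Summarize this week's AI news.")
print(prompt)
```

The point of the sketch is that the "personality" is nothing more than text the human wrote, concatenated in front of the request: agency stays with the author of the file.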
Yeah, and it comes back to the McLuhan comment, we shape our tools and thereafter they shape us. We shape our AIs and they do shape us, but given that we shaped them in the first place, we get what we deserve, Keith, I guess.
will never be as clever as AI. It's the first generation where new children, even taken collectively across the whole world, won't be as clever as AI. And that creates this sense of AI as a separate thing, as opposed to a tool that we control. And I think in that question, all of the fears live.
Yeah, and it's also culturally a movie about the Daniel Roher generation of obsessive parents. So it's an interesting film, interesting conversation. I think what it speaks of, Keith, perhaps as a conclusion, is that in our age of AI, we need a strong sense of the self. I think that that's what... Ezra Klein is seeing new in San Francisco, unless we have strong senses of the self, then we will indeed be lost. But if we have the strong sense of the self, we can push these AIs around and shape them as we want. Is that fair?
Well, that was an unfair conclusion, Keith. I thought we liked to disagree. But anyway, excellent conversation. Next week, we will no doubt talk about the newest of new things, AI. Thank you, Keith.