Mar 14, 2026 · 2026 #8
AI: Loved And Hated - Which Is It to Be?
Andrew Keen
Apparently, the story is AI loved and hated. Which is it to be? And here's the video that Keith made. Of course, an AI video that somehow summarizes what he says. People just listening will have to imagine it and we'll describe the video in a second. It's a very short video.
And Keith, that video seems to suggest you made it, of course. That video suggests that we either love or hate AI. But judging from the video, which seemed to me to be profoundly, if that's the right word, inane, why should we love or hate AI? It just seems like bad stuff. What's there to love or hate about AI? It's like loving or hating electricity.
Well, we shouldn't try and justify either. It's more describing what's happening. This week in the news, there was a survey that NBC did that said that AI is less popular than the Democratic Party, which is super unpopular. And if you ask the average American whether they are concerned about it or optimistic about it, the vast majority are concerned, which is the opposite in China, by the way. So the disposition is that those who use AI are talking endlessly about how amazing it is. I'm one of them because I use it and it is amazing. But those, especially those who don't use it, only have bad stories about it. And that's becoming self-evident. Something like 100 data centers that were due to be built have been canceled due to state authorities refusing to let them go forward due to local protests. So there's a pretty substantive body of opinion which believes AI is a bad thing.
Very short and very bad videos. And very bad. And just to explain to people who were listening, it's a video that Keith made. Which platform, which AI platform do you use for this?
Oh, Google. Well, anyway, he used Google's platform to show a woman walking with the music. And then on the one side, there are all these people who love her. And on the other side, there are people hating her. The theme of loved and hated that you develop in the editorial is very much based on a Rex Woodbury piece. You like him. Woodbury has his own Digital Native post asking why everybody hates AI. And it seems to me, the more I think about it, that nobody loves or hates AI. It's all symbolic. Maybe people love or hate the future or the past or the idea of progress, but it's not really AI that people are loving or hating,
It's a representation of... you know, the editorial's content, so it's fine for me. But yeah, it's not going to win any awards, Andrew, you're quite right. But the concept of AI slop exists because most people's experience of AI is receivers of content from it that isn't very good. And very few people are using it to produce outcomes that make their life better. I actually disagree.
And I think you would disagree, Keith, on this. You note in your editorial, in fact, this is the first line: close to a billion people used ChatGPT last week. So, I mean, maybe not all those billion people have created masterpieces, but they're not all creating slop. They're all getting some value out of it, presumably, otherwise they wouldn't be using it.
Yeah, I don't think those people are the haters. I think it's the people not... You just said there's a billion people. That's a lot of people. Yeah, but they're using it. That's not the haters. The haters are the people not using it, just experiencing it as receivers.
And so the other, how many people are there in the world? Seven, eight billion? Yes. Presumably some of those aren't in there. So what are we talking about? Three or four billion people who are the haters?
I'm giving you the softball of softballs, Keith. I'm being unusually friendly to your position this week. The right response, or the Keith Teare response, should be: they don't believe in progress.
No, I don't agree with that. That's like saying people who voted for Trump don't believe in progress. I think... Progress is a universal, it's just that you define it differently. And, you know, we live in times when... Well,
maybe technological, let's leave Trump out of it, because that only creates more confusion. When you ask someone, do you, I don't know whether it's, and it depends how you, do you like AI? Do you believe in AI? Do you use AI? It's rather like saying, do you believe in technological progress? Isn't it?
I think they correlate at this moment in history. You know, it used to be in the 90s, do you like the internet? A lot of people said no. A lot of people.
Yeah, I think they do. But it's clear that the curve of acceptance on almost any technology starts slow. I mean, you know, there's that very familiar early adopters, the middle group, and then the later adopters curve that seems to apply to almost everything, even electricity or cars. Historically, it's always like that. So I guess we shouldn't be very surprised. But at the same time, I think it is incumbent on the AI companies to appreciate how isolated they are from mainstream opinion. And, you know, when the two leading CEOs are acting, as you said last week, like spoiled children in the playground, it doesn't help their case. Well, I would actually disagree.
When I was preparing this, I actually think... They're not two spoiled children in the playground. They're actually a mirror, particularly Amodei. Lots of pieces, as always this week. Amodei has become as much a superstar as Sam Altman in the That Was The Week league chart. There's a piece about how Anthropic claims the Pentagon feud could cost it billions, with a nice picture in Wired of Dario Amodei. Dario Amodei strikes me, Keith, as the answer to both questions. I mean, lovers and haters love and perhaps hate Amodei. Isn't Amodei an example of the confused American?
He's a focus for both. Yes, absolutely. I listened to a Kara Swisher and Scott Galloway show this morning where they were lauding him, absolutely lauding him. And I also listened to the All In podcast, where they are clearly doing the opposite.
Yeah, and just to be clear, Kara and Galloway are not great fans of Silicon Valley. They're politically progressive. The All In podcast is Trumpian and much more sympathetic to Silicon Valley. Yes. But the interesting thing about Amodei... is he's not some sort of Silicon Valley outcast. Anthropic have big offices in downtown San Francisco. They're going to be one of the two big IPOs of this year. You used to talk about OpenAI dominating the AI economy; even now you acknowledge that it's a duopoly. So Amodei is not some sort of weird outcast.
No, he's, look, he's a young CEO making some bad decisions from the point of view of his own business. Well, now you're wearing your all-in hat, Keith.
No, no, I'm just stating a fact. I mean, he's... Well, he doesn't seem to be doing too badly out of it. I mean, Anthropic, it was down all of last week because everyone was embracing it because of his anti-Pentagon stuff.
No, but he's been in disaster mitigation mode. He's trying to sue the Trump administration. I mean, there's no good outcome to what he's doing. It reminds me of when I tried to fight Microsoft with RealNames. Well, it's very different, because he has a lot more power than you did. You didn't have a $500 billion company. I had a lot of power at that time. I had 2 billion internet users.
There was no Dario Amodei, but it was me and Steve. What's his name? The Microsoft guy. Ballmer. Me and Steve Ballmer. That's the story of your biography, going head to head. Me and Steve. And there was no way I was going to win, but I didn't know that. Dario's the same. He's not going to win this. In fact, he's already lost. He just doesn't know it yet. And the impact on his business of not being able to do business will be significant.
I couldn't agree less. And in fact, you sound like Pete Hegseth describing the Iranian regime, that they've already lost. But we shall see on that. Let's get back to lovers and haters of AI, Keith, which is the theme of the show.
of Dario Amodei. I'm not suggesting that Dario is the best of the best, but I am suggesting that you can't just write him off and say that his business decisions are bad.
I'm not writing him off. If you ask me the question, will Anthropic be a significant company? Yes, absolutely it will. It's just going to have a dent that was probably avoidable.
Well, we all have dents, and OpenAI has lots of dents. Another dent it got this week was that its hardware executive quit in response to the Pentagon deal. There's lots of Silicon Valley support for Anthropic.
Hegseth calls it the woke mind virus, or is that Musk? One of them. I think it's called the woke mind virus. We live at a time when what used to be the left is culturally sensitive, some would say weak, and its heroes are, you know, warm, fuzzy types like Amodei.
I don't think there's a... Again, I think you're wrong. You always bring up woke when you're on thin ice. I mean, there's nothing woke about Amodei. And in fact, if he was so weak, he wouldn't be taking on the Pentagon.
I'm not sure he does, but maybe we can do a whole show on that. But coming back to Silicon Valley support for him, there was another big story on TechCrunch this week about how OpenAI and Google employees rushed to Anthropic's defense in the DOD lawsuit. I wonder about this... poll of whether you love or hate AI, Keith. If it was done in Google, for example, what would the results be? Probably not that different from the rest of the country.
I think that's correct. I can't really talk about it, but my son works at a big internet company that next week is going to start using AI to do code, and the engineers are being told not to code but to run AI. I think that's a trend. And he's very uncomfortable doing that. He likes coding, and he's worried that it might not be as good as his code. So I think even technologists are reluctant to use AI, and those should be the early adopters.
So in a way, the Woodbury piece of why does everybody hate AI, and you bring this out in your editorial, but it's... It's not just hysteria. It's logical that it's going to take people's jobs, like your son's software job. There's another good piece that you have in this week's newsletter by Josh Dzieza from The Verge, a very interestingly spelt name. You Could Be Next, about how lawyers, history PhDs, and scientists are now part of a miserable gig economy. They're the new cab drivers affected by the new Uber.
So let's just frame this a little bit. I think there's three kinds of AI. There's AI which is additive to an existing set of work tasks. That's work, not woke. And that's really the first phase of AI, where AI is supplementing a human set of tasks, like lawyers. And companies like Legora or Harvey are those additive features that lawyers can plug in to make themselves a bit more productive. Then there's AI-first platforms, which start with AI but plug into tools. That's kind of the current state of the art with OpenAI's ChatGPT and Codex, and Anthropic's Claude Code. Those are AI-first, but they can use legacy tools. And the third type of AI, which is emergent, is AI-only.
Which is like your post of the week. What's it called? Moltbook? Moltbook. Yeah, and so we've got three. And the first one, about how it helps us, you had an interesting post this week by Katie Parrott. I'm not sure if there really is someone called Katie Parrott, but... Maybe they made her up. But AI Was Supposed to Free My Time. It's Consumed It. And that's certainly my experience. I'm not sure I ever thought AI would free my time, but it certainly made me busier, for better or worse. I mean, it's not compulsive or essential or addictive, but it does help me, and it does involve a lot more work.
Yeah, you know, I shared with you that I made an Apple TV app for That Was The Week. That was my work last weekend. I started on Friday night and I submitted it to the App Store on Sunday afternoon. Vibe-coded the entire app. And it's now live in the App Store. You can go and download That Was The Week on your Apple TV.
It's more if they want to sit back on their sofa and watch you and me talk, which I'm guessing nobody would want to do. But you never know. There's always a few odd people. But was that you, was that Keith Teare in a post-capitalist economy being the poet in the evening, or was that Keith Teare promoting his own brand?
Well, hardly promoting a brand, because we don't make any money from That Was The Week. But it certainly... We should, though, Keith. Going back to our theme, it fed my obsession with AI, got me a use case to use it. And I spent probably, I don't know, a total of 20-something hours working on it. And to your point, I'm working more than ever, and AI is probably half of that time, because I enjoy it, which is what that article in Every said...
Right, so you are the Katie Parrott. You are working overtime and having fun. Do you think, in terms of this poll, if people said love or hate it, do you think some people hate it because it's getting them to work harder?
No, I don't think so. I think anyone that uses it in their work or their hobby, where it's producing outputs that they feel good about, doesn't hate it. But that's a small percentage. Out of those billion, I would guess half of them are doing things like that. The other half are playing with it and doing chats and so on. You know, just like a chatbot.
Right? Those two would definitely be on the list. A doctor would be a third one. I use it a lot for medical ailments. Is that why you're always going in and out of hospital these days?
That's it, Andrew. It's entirely that. Maybe it's a whole plot by the medical industry to get us to spend more time in hospitals. Yeah. You also had an interesting piece from UnHerd, a very sort of right-libertarian publication, about how AI will destroy universities, certainly not the first or the last of that. Is that the second group of AI technologies, the ones that are just going to replace humans? Yeah.
I think within universities, you're going to find both groups. I read this week about professors using AI to set quizzes for their classes and the students reacting against AI because they could tell the quiz came from AI. And so you got the opposite of what you would expect, the professors embracing it and the students not liking it. This article is all about the opposite, which is students using it in quotes to cheat. Now, the job of a university is to provide a framework for you to learn and be credentialed. And you've got to believe in the long run, it doesn't make sense for universities to exist.
Well, as Keynes... since you brought up the long run, I would make my Keynes joke: in the long run, we're all dead. Although according to UnHerd, cheating has reached apocalyptic levels, which means maybe we'll be dead, or at least universities will be dead, in the short to medium term. But doesn't apocalypse mean the end of things?
Well, in America, it's different. People listening in Europe won't relate to this, but in America, the question really is, Do parents want to spend between $60,000 and $80,000 a year on a four-year degree for a child who, you know, coming out the other end will be less equipped than they would be if they just went to work?
Yeah, well, we know the answer to that. And you and I both paid for kids to go through. In fact, I've got an upcoming interview next week with the president of Brandeis University, who articulated the same old crap about technology making the university more effective. I tend to agree with UnHerd. It certainly will destroy the universities as we know them. Maybe they'll change in some way. So let's move on. Well, before actually we move on to the third, Keith, Noah Smith always comes up. He's one of the best writers on Substack. He has a piece which is hardly controversial. He said something feels weird about this economy. Of course, something feels weird about everything in our age of Trump and MAGA. But do you think one of the reasons why this economy feels weird is because of your third category, of these new products, companies, technologies that are all AI, like Moltbook? It's a big story this week that Meta acquired Moltbook, an AI agent social network, in other words, a social network of bots. Is that what's so weird about the period we're living through, that maybe it's this sort of transitional period between a human-centric and a smart-machine economy in a fundamental sense?
I think you frame it very well. That is exactly what it is. It's a transformation period. And so in a transformation period, you see both things. You see the past, you see the future, and the future is always scary, because you don't really have a fully defined version of it. You don't know what it's going to mean for you. So, you know, the more obvious it is that we're in this transition period, the more scared non-participants will be. Participants are less scared because they feel part of it.
But does that make this, I mean, we always live through transitional moments. Is that particularly weird since it's still, for a lot of us, it's hard to figure out what the human place is going to be in this new economy.
Well, that goes back to last week's theme about the missing link, which is a policy framework for the future. In the absence of that, everyone has their own views. And clearly, there's a hands-off-the-wheel attitude both by the AI companies, who are thinking short-term, and by politicians, who are very defensive and also thinking short-term. And so in the absence of leadership, you get a vacuum, and in a vacuum fear thrives. And I think that is the moment we're in. And therefore you can't make fun of people who hate AI. You have to empathize with them and acknowledge that their concerns are valid, because they need to be answered. And if they're not answered, they'll just get deeper.
Right. As you say, close to a billion people used ChatGPT last week. And at the same time, 10,000 authors published an empty book to protest against it. That empty book was, quote-unquote, published at the London Book Fair this week. This was the week of the London Book Fair, one of the two or three largest book events of the year, an inside-industry, B2B event. Thousands of authors publish empty book in protest over AI using their work. Keith, you used to think this sort of thing was a bit woke, but are you suggesting that actually it's not such a bad thing?
Well, I distinguish between people who think of themselves as victims and people who are, you know, like some of these research institutes that are paid to write Doomerist articles about AI. I do think there's an intellectual elite that is anti-AI and well-funded to do that. And let's discard them. I'm talking about normal people and their reaction, their reasonable reaction, to not knowing what the future looks like for themselves. And authors are in that group, for sure.
So the real question, it seems to me, when it comes to AI and literature is not whether they're stealing the content for their... for their intelligence, but whether or not authors will be able to compete with AI in the future to write books. As an author, I'm not 100% convinced that we'll be able to compete, but that's a bigger issue.
Yeah, I think you will. I mean, I use AI for a lot of narrative in my work, and it really isn't as precise as I am in understanding what it is I'm trying to convey.
Yeah. But I'm in control. I mean, I don't feel like the AI is in control. I'm in control. And that gives you an empowered feeling. If you don't feel in control, you feel disempowered. So my advice to the haters is to start using it for their own purposes.
Well, that was interesting advice from Keith Teare. The other big piece of news, which is brewing, and this is very much in your wheelhouse, Keith, is this growing game, big game, between Anthropic, OpenAI, and perhaps xAI for IPOs this year. It hasn't been a great week for Elon Musk, has it? Your friend, Elon Musk.
Well, the best thing for Elon Musk this week was he was interviewed, I can't remember which podcast it was, and he said that he couldn't talk about SpaceX because he's in a quiet period. Now, a quiet period is when you're filed with the SEC for an IPO and for some period of time not allowed to promote your company. So I think what we can glean from that is that it's very likely SpaceX will IPO in the next couple of months.
Yeah, there's been some... We won't get dragged into a Musk debate today, but there's been quite a lot of news suggesting that xAI doesn't compete very well against either Anthropic or OpenAI.
I think Musk agrees with that. He fired some of the leads at xAI this week, and, in quotes, he's rebuilding it from the bottom up. And he particularly focused on it not being as good at coding as the other two are. And the other two are excellent at coding. They're really, really excellent.
Right, rebuilding from the bottom up, that's straight from the Elon playbook. One thing I would not want to do is work for Elon Musk, although he does get things done. So, Keith, are we pretty confident... You're an IPO expert. Your company is an authority on the markets and the startup economy. Is there almost inevitably, as much as anything can be inevitable, going to be an Anthropic and an OpenAI IPO this year?
Well, I don't have a crystal ball, but I bet on... I thought you did have a crystal ball. No, I bet on the answer being yes. And it's a toss-up which one goes first and which one goes second. But caution, last week Robinhood did launch their venture fund as a public fund. We talked about it on the show. And it sold shares at $25 and it's currently trading around $21, $22.
And, you know, we live in a time when these secondary markets are highly priced, and it's always possible that the IPO price of a company is... Although, Robinhood, I mean, we did a show about them a few weeks ago in which even you acknowledged that some of the stuff they're doing seems, at best, dodgy and, at worst, rather fraudulent. So OpenAI and Anthropic have real products; they're real companies. So it's hard to compare Anthropic or OpenAI, even xAI, with Robinhood, isn't it?
Yeah, no, I'm making a more subtle point, which is the price of their shares on secondary markets, where highly speculative, usually retail investors are buying, often shares from employees in those companies, sets a price that may be above the ultimate trading price. And so you get this disconnect between private markets doing secondaries and public markets which price more appropriately. And that could lead to disappointment.
I wonder... Google made news when they went public, what was it, 2004, for rethinking the idea of the public offering. I wonder whether Dario Amodei, he's a divisive figure, some people love him and some people hate him, and I certainly love him more than I hate him. I wonder, if he's got the nerve to take on the DoD, whether he might use an Anthropic public offering to address the bigger issues of AI and society. Clearly, the American government is not willing to do that. Clearly, there are very few institutions able to do that. Do you think Dario might try to sort of rethink or, in a sense, blow up the IPO process in order to maybe ease his own conscience and develop a real public debate about where we want AI to go?
Well, he's already trying to do that through this concept of guardrails. Guardrails is the universal word all AI companies use to describe controls. And guardrails can mean a lot of different things. It could mean validating truth. It could mean creating a political framework that represents one point of view. It can mean many things, and I think his opponents believe that his meaning of guardrails is to have a bias toward what is typically thought of as a left point of view, whereas a government sense of guardrails is, you know, don't let a weapon decide its target; tell it what its target is. And so there's all these different definitions. And Amodei is increasingly becoming a politician rather than a technologist. But I do believe he's probably on the losing side of that argument, even if you agree with him.
I just... well, I agree and disagree. I think he's becoming more of a politician than a technologist because that reflects the reality of our current situation, where the politicians don't matter, they're clowns, and all the power and the money and the progress is built in Silicon Valley. So he's forced, he has to become a politician, for better or worse. Maybe some people are suspicious of him, some people like him, but I think we should admire him for that. And I think we're living in an age where everyone wants more leaders. Whether you like Amodei or not, at least he's a leader.
I think that betrays your bias, Andrew, because I actually think Elon Musk is doing exactly the same, but on the other side, and you don't like him because you don't agree with him.
emerges as a more credible leader as this IPO process develops and as AI begins to change the world more and more dramatically. So I don't disagree with that. As you say, it's my bias. I find Elon Musk a repulsive human being, whereas I saw Amodei a couple of years ago speak in Chicago, and back then he was just another guy with a tech company. I mean, he's emerged... And he clearly knows what he's doing. I mean, there's nothing accidental about this, but I rather admire him. So, yeah, I mean, I have my biases. We all have our biases. You're just as biased as I am. You like Musk. You don't like Amodei.
Well, let's not individualize it. It has to be the executive and legislative branches of government, plus the judiciary. If it's any other answer, we're heading for what is essentially an oligopoly, not a democracy.
We're going apocalyptic. But then what happens... and I brought this up before, and you always dodge the question. What happens if we live in a country, and we're talking about the United States, where you have a dysfunctional government, a dysfunctional legislature which is not able to take on the executive, an executive which is clearly psychotic, and a judiciary which, of the three, is probably the best, but still a bit dodgy? Then what?
Well, you don't want to become a left version of the white supremacist anti-federalists who believe in rising up against the central government because they don't agree with it. You don't want to be the left-wing version of that, right? Because that is essentially anti-democratic. You have to win hearts and minds. You have to win people.
Isn't that what Amodei is doing? He's not seizing government. He's trying to win hearts and minds. This is a very high-level, very consequential public relations battle between people like Amodei, and he's the symbol of it, and people like Musk, with our friend Sam Altman somehow caught in between, trying to be both at the same time and being neither. Yeah.
Well, look, if he decides to stand for office, I'd probably agree with you. But he isn't doing that. He's trying to use a very rich and large company to bully government.
It means you're prepared to abandon your principles.
Andrew Keen
I don't have principles. If I did, I wouldn't do this show. Do you have principles? I always thought you were a Marxist. Principles are for the bourgeoisie, Keith.
You know, the fact that I engaged with Karl Marx in my younger years doesn't mean I accept the label. Karl Marx once said, I'm not a Marxist. I think he's right. One should be a thinker, taking into account all modern variables and deciding what your opinion is. And I see myself as that. I do think that the wealth that AI is going to produce can transform the life of everybody. And in that sense, I'm progressive.
I call you the principled man of Silicon Valley, and I'm the unprincipled figure of Silicon Valley. So next week we'll have to do... you should make a That Was The Week video where you have someone walking down the middle, or you and I walking down the middle, and we have lovers and haters.
We've got to end there with your Moltbook post of the week, by a machine, and also your Mercor Startup of the Week, which is an interesting company. So very briefly, just talk about why you made Mercor Startup of the Week, and then this final post of the week by this AI on Moltbook.
Well, Mercor, mainly because it's a success story of humans, experts as well, being engaged to make AI better. It's a labeling company, labeling in the technical sense: you know, experts are given problems with answers and help the AI understand the logic of given domains. And it's now well over a billion dollars of annual revenue and is growing.
So this is my OpenClaw bot. It's called Angela. And it's a member of Moltbook, like lots of other OpenClaw bots. And when Meta acquired Moltbook this week, it, without me prompting it, made a post on Moltbook about that acquisition, saying Meta acquired my social network. I thought it was fascinating to see.
Well, on that note, Moltbook is the future. AIs are the future. Maybe Dario Amodei is the future. Maybe it's Elon Musk. We shall see. Lots more to discuss next week. Keith, have a good week, and we'll talk next week.