Jan 31, 2026 · 2026 #2
Growing Up?
Speaker 2
Hello, everybody. It is Saturday, the 31st of January, 2026. 2026 is beginning to grow up, so to speak. And we are talking about growing up in our tech roundup, the last tech roundup of January 2026, with my friend Keith Teare, the publisher of the That Was The Week newsletter. His title for this week is Growing Up. It's on, of course, AI. What else? AI: infant, adolescent, or mature? And it's sparked by an influential essay this week by Dario Amodei, the CEO of Anthropic, entitled The Adolescence of Technology. Keith, is it useful to use words like adolescence and growing up when it comes to technology?
No. I mean, it implies you know the life cycle. With a human being, we can use terms like that because we know the average life of a human being, we know where they are in their various stages, and we can measure that. With AI, honestly, we really don't know where it is, because we don't know the life cycle. And the most likely answer is that it's just after birth, when it's still not capable of...
In January, now, as we speak, January 31st, 2026, we know that there'll only be 12 months in 2026, so we're still in a relatively early stage. But you're saying that when it comes to AI, we actually have no idea.
We have no idea, and I think we all have different assumptions, but my assumption is that we're very, very close to the beginning of its life, and there is no end, really. It is going to be that thing that some big tech CEOs strive for, which is eternal life. It is going to be eternal life. So the idea that it's already an adolescent seems far-fetched.
Yeah, eternal life in the same way as we still live in the age of electricity. So what is Amodei playing at? There was an interesting piece in The Atlantic about Anthropic being at war with itself. Dario seems to want to have his cake and eat it. He wants to be a critic, but at the same time, he wants to profit massively from AI. I mean, Anthropic remains the second most valuable AI startup after OpenAI.
It is in the context of AI, but not as a company. No, look, he is at war with himself. He reminds me a little bit of a teenager that's been raised by religious parents to believe that you should only have sex after marriage, but he wants to have sex now and he feels guilty about it. He's somehow angst-ridden about what he's built and he's fearful that what he's built may be outside of his control. And so he constantly has this narrative or this dialogue with himself and probably his colleagues, what have we built? And is it okay? And so I think that insecurity is, I'm not criticizing it. I'm sure he is well-intentioned and he means it, but it means that he's not a thoroughgoing advocate in the same way that say Sam Altman is.
Although Altman's had his moments where he's been very critical, where he's been concerned about the implications. So, I mean, Amodei isn't alone in his concern with the impact of AI on society, from within the ruling class, shall we say, of this new AI world.
Yeah, but he's a lot more ambiguous than the creators of the first nuclear device were. I mean, they certainly had concerns, but they went along with Hiroshima and Nagasaki.
Correct. And now the difference here is that there is no government telling him what to do. He is the government of his own company, at least. And he's almost begging for the government to help him think it through, which I think, if they did, he immediately wouldn't want, because he'd realize, you know, you get what you wish for and it isn't very pleasant or very educated. But I do think he's angst-ridden, and I think that typifies him. Many people will say that that is worthy and it makes him a better person. Others will say it's less than full-throated belief in his own invention and it will mean he won't win. But this essay today should be put in the context of his prior essay, which was about a year ago. This essay is really... Yeah, and it was called what?
Machines of... Loving Grace. Yeah. The original essay was, what, about 15,000 words. This one, the new one, The Adolescence of Technology, is shorter. It just came out this week.
Yeah, and now in this one he's really trying to have his cake and eat it. He's trying to emphasize all of the positive benefits of Anthropic's Claude and other systems. At the same time, he wants to continue to embrace the possibility that it could all go disastrously wrong. He doesn't believe it will, but he still has that inside, almost like a covering-his-butt part of the narrative. But if you measure it against the first one, this is a move back, let's say, to the center, where he's validated keeping going. So this is an essay that says we should keep going.
So you're saying that the essay is interesting not so much in a broader, universal sense of what he's saying about AI or society, but more in what it says about Amodei and Anthropic.
Well, I had Hinton on the show a few months ago, and he had an article, or a speech, that went viral saying there's a 15% chance that AI will wipe us out. And he acknowledged, when I asked him why he picked 15%, he said, well, because I have no idea. Amodei and Hinton have no better idea than anyone else. I mean, it's an interesting essay in a way. He's got this idea that in 2027, or in the late 20s, and I'm quoting him here, he said, imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman or technologist, acquire this power. But what exactly does that mean? That's why I don't really understand his method of reasoning. What does it mean? I mean, does he really imagine that 50 million people are going to be smarter than any Nobel Prize winner, statesman or technologist? It just means that they'll have the technology at their fingertips, but it doesn't make them smarter.
Well, what he's doing there, Andrew, is he's materialising an AI agent as a human being. So when he says 50 million people, he doesn't mean people; he means AI agents. And they are all going to live in a data centre, or data centres. And he's making the point: imagine. He's using the example of people because it might be easier for you and me to envisage that, to suspend judgment and just imagine that came about. And he's saying that is what is going to come about with AI agents. Now, some of the other news this week, which I'm sure we'll get to later, is about something called AI swarms.
Yeah. Now, so imagine, if you will: Kimi K2.5 was released this week, and it has explicitly got this thing inside of it called swarms. You give it a problem, and it's allowed to create many, many agents in a hierarchical, job-specific format that can carry out whatever it is that you want to achieve. And swarms can grow to hundreds, if not thousands, of agents simultaneously working together and talking to each other. Now, that was manifested this week when somebody created something called Moltbook. Clawdbot had an intermediate name for about two days, Moltbot, when it retreated from the Claude name. And somebody created Moltbook, and the bots were allowed to register with Moltbook and talk to each other, only bots, no humans. And it's turned into this social network for bots that's fascinating to go and read.
Yeah. Now, this concept of swarms goes back to Amodei's idea of a data center full of 50 million highly intelligent beings. I think we can probably say for sure that that is going to happen. I don't know if we know when, but it wouldn't even be shocking if it happened this year. Or it might happen a few years from now. And when that happens, and bots are talking to each other and are reciprocal and can learn from each other, as Moltbook is kind of showing live, then it is possible that this concept that has been around for a long time, the singularity, when change happens faster than humans can conceive of it and digest it,
actually kicks off. But here's the question. Tomorrow I've got a conversation with John Taplin, who has an interesting new essay in Rolling Stone on whether the counterculture can rise again. And he writes about the influence of everybody from Charlie Parker and Dizzy Gillespie and Miles Davis to Bob Dylan, Mark Twain, blah blah blah. But the bots, however smart they are, are not going to be able to turn into John Coltrane or Bob Dylan. The whole point of, shall we say, progress is to do things differently. And how are these bots ever going to be able to think differently?
Well, they'll certainly think differently than any individual. You know, the bots I use, I learn a lot from them. Now, there are other people who may have said the same things as the bots I use. So it's not like they're smarter than everyone. But they're certainly smarter than me when it comes to areas I'm not expert in.
So your point that the bots can't exceed the sum total of all human knowledge is well taken. I think that is currently true, although you do have to look at the continuous-learning-machines people, who do believe that will happen. But certainly for now, let's just take as read what you said, which is that the sum total of human knowledge will not be exceeded by AI.
Well, that's the point I'm making, that you can create a bot that's a better artist than you. It may not be better than Picasso, but it'll be better than you. So you'll be... Not better; the point is it's more original. It too can be original compared to you. So think about the individual being enhanced, as opposed to the collective human race being enhanced. Yes, it won't create a Picasso, although, you know, I'll put a question mark after that. We don't know for sure, but let's assume that's true. It can certainly be a better artist than you or me, because we're not artists.
I think we have to acknowledge Keith, you're no Picasso.
Speaker 1
I am no Picasso. Nor am I. Neither of us can claim to be Picasso. Yeah. I'm probably not even an Andrew Keen, Andrew. Yeah, which isn't saying much either. But I'm not sure it's useful to say it can't be Picasso, because I'll say yes. The more interesting question is, what can it be, looked at from your point of view as an individual? And the answer is a lot. So I think that's why we're all fascinated with it. It's what it can do for us, not...
Well, OpenClaw is a proactive agent. What does that mean? It doesn't wait for you to prompt it. Its goal is to build up a set of daily, weekly, hourly, monthly tasks that help you do whatever it is you do. In my case, it's SignalRank-related stuff and That Was The Week-related stuff and some personal stuff. And once you've interacted with it enough for it to kind of understand your email, your calendar, your Google Drive documents,
And that's as long as a piece of string. You could allow it to write and answer emails. You could allow it to schedule appointments. You could allow it to book flights.
How does it know what flights to book? What happens if you wake up in the morning and you say, oh, I feel like going to Paris next week? It's not going to know that.
No, but if you've put in your calendar that you're going to a conference in Paris three weeks from now, it will say, do you want me to book you a flight? And you can say yes. Like, I've got it doing a wake-up call at 6:30 a.m. I get an audio summary of my day, with questions. It's like having a personal assistant: would you like me to do this, would you like me to do that? And if I say yes, it goes and does them. Now, it's very geeky to set up, and it requires a lot of permissions, and it's inherently unsafe, because you have to give it your logins, passwords, all kinds of stuff. In my case, I sit behind a firewall in my house and I am technical enough to know that it's safe. But, you know, a normal person probably wouldn't have the firewall and wouldn't know.
It would know too much about you. Well, I agree. I think OpenClaw is interesting, but we're still a way away from it. I mean, as you say, it's very geeky, very insidery. Once Apple gets its hands on this... I mean, inevitably, I think you're right, there is a certain inevitability about this. At some point, a company like Apple or Google is going to come out with a product that is easy to use and makes sense to consumers. Is that fair?
The question is, will they do it fast enough? Clawdbot, now OpenClaw, is only a week old. They've made it open source, which means that there are, I'm going to guess, tens of thousands of developers all over the world enhancing it right now, more than the number of Apple's employees or Google's employees who would be working on something like this. And they're all developers with talent. So it's entirely possible that this thing gets big fast.
Interesting. Let's briefly go back to Amodei's piece about the adolescence of technology. He points to five things we should be worried about. Keith, you're always somewhat skeptical, I think, of doomers, although he's certainly not a doomer. In fact, he makes it very clear that he's opposed to AI doomerism, if there's such a word. But of the five that Amodei lists, which would you agree with and which would you dismiss?
Leave them on the screen and let's go through them one by one. The first is what he calls autonomy risks. So autonomy risks and autonomy rewards probably have to go alongside each other. It's really to do with what amount of independence you give the AI to do things. And Clawdbot is a great example. It's gone to the extreme of autonomy, and, you know, it can basically do anything you can do. And so if you're a bad actor, it can do bad things. And if you're a good actor, it can do good things.
I mean, you have these conversations, not you and I, but every tech conference has the same conversation. And that's obvious. I mean, we don't need Dario Amodei to remind us.
So you said to me before we went live that Amodei is on one side of the debate and David Sacks on the other, Sacks, of course, being Trump's man. But it sounds like, in some ways, Amodei is not that opposed to Trump if he's also a China hawk.
So the second one is misuse for destruction. Well, we know all about that. How dangerous is it? I mean, if you have some rogue, some quote-unquote terrorist, who gets hold of this AI, or doesn't even need to get hold of it, can just access it, are we really at a point where they could destroy the world?
Look, if somebody wanted to use it to destroy the world, they could program it to do that. It won't choose to do it all by itself. What do you mean they and it? What is it? Well, let's say, you know, Putin decided to have his people develop on top of current state-of-the-art AI a proactive dominate-the-world strategy. Can software do that? Absolutely it can.
Well, because he'd be removing his own agency. And so this is really a question about agency, human versus machine agency. I don't think even the most extreme government in the world wants to give up agency. So I think this is super unlikely. And it won't happen just because of the AI. It will happen if humans choose to make it happen.
Although there are humans, of course, who would like to make it happen. What about misuse for seizing power? What does that mean? Seizing power of a country?
Or other countries. It could be either or both. I mean, I'm pretty sure... Isn't it a bit vague and childish? I mean, what does that even mean? You know, this is Amodei trying to imagine worst-case scenarios. By the way, he goes on to dismiss most of these himself as unlikely.
Then there's economic disruption. And one of the other pieces that you linked to this week is one by Peter Diamandis, one of the more articulate and, I think, intelligent figures in the abundance movement, who has a plan for universal high income, or at least he's been talking to your friend Elon Musk about that. What do you make of this, Keith? Is this in response to the economic disruption of AI, or as a consequence?
So what Amodei means is destroying the economic capability of adversaries. I think a more proper way of thinking about economic disruption is the replacement of labor by capital, and the growth that comes from being successful at that, which is where Diamandis comes in. Certainly there is going to be economic change driven by AI; there already has been. Last year's GDP growth was almost entirely accounted for by it. And there's no reason to believe this year won't be the same. So then it comes down to how fast that will happen. And with the swarm concept, you've got to assume the answer is faster than we anticipate. And if it's faster than we anticipate, what should we do about it? Diamandis and Musk, and I think even Altman, are all asking that question. And they all seem to want the answer to be that we can figure out a way for everyone to benefit.
You know, this is like four-year-olds talking. It's like some preschool conversation. Everybody wants everyone to be rich. That goes without saying. But it seems such an absurd thing to say, this idea of universal high income, at least in the world of late January 2026.
Well, I don't know. If you read his thing, it's called the Mosaic Model. It's an acronym; each letter stands for something. Oh, that's clever. And it's a specific plan. You may disagree with the plan, but it isn't four-year-old stuff. It's more like a government planning agency coming up with an actual plan, which you've been asking for for a while.
Wait, wait, wait. Doge was to improve government, not to destroy it. Well, that's a matter of opinion. Anyway, go on. But this is basically... I think there's all kinds of issues with the plan, by the way, but at least give him the credit for putting on paper an actual sequence of events that you can consider that would, if implemented, lead to a distribution of wealth.
I don't want to get into too much detail on this, but give me one example. One example in this plan that is in any way viable, not just a children's story, a children's bedtime story.
Well, he's talking about a tax on companies that use autonomy, a tax on the growth in value driven by the autonomy, and for that tax to take the lion's share of the new wealth and distribute it, via mechanisms, to human beings.
And he could imagine that you would get the governments of North Korea and China and Russia and Iran and the United States and Europe all around the same table to agree on this?
I mean, if one puts one's political brain on, you could imagine that Russia and China might do it first. They're more inclined towards distribution of wealth than, let's say, the UK or the US.
Well, I'm not sure about that, given the plutocracies in those countries. So is there anything that we should take seriously apart from their... goodwill, their intent?
I think you should take seriously that this is the first long-form operational roadmap to distributing the benefits of AI. It will not be the last, and it certainly is unlikely to be implemented, but at least it's put down a marker.
Well, many other people have thought about it. I mean, is there any contradiction between the fact that Musk is likely to become the first trillionaire and the fact that he's somehow involved with this too. Is he willing to give up all his hundreds of billions of dollars?
Well, his view is that money itself will lose relevancy. So ultimately, if that's true, it won't matter how many dollars he has, because they won't be worth more than somebody else's ability to live a certain lifestyle. So, I don't know, I think you've got to take Musk at his word. There's no point trying to imagine some deviousness in his brain. Take him at his word. And if you take him at his word, you know, he's just closed down, this week, the Model S and the Model X to free up a production line for the Optimus robot. And closing down the Model S and the Model X, this is in Tesla, yeah, in order to create robots, tells you that he means what he says. You may disagree with it, but he really thinks that robotics plus AI will lead to human freedom, as in no need to go to work every day.
Yeah, but I don't see any connection. I mean, I take your point about dropping the Model S and X, but I don't see any connection between that and the idea of universal high income. I mean, he's still doing it from a corporate point of view. He's not doing it to blow up Tesla.
I would recommend listeners go and listen to Diamandis' interview with Musk. It's over an hour. If you listen to what Musk says and his tone, and you make your own judgment, you know, what you can trust and what you can't trust, I think you'll come away with a very different impression of Musk.
We shall see; certainly you and I differ dramatically on Elon Musk. I had an interesting conversation this week with Nicholas Thompson, the CEO of The Atlantic, a very smart man, who was at DLD. It was my final interview from DLD. He acknowledged that he was both excited and terrified by AI. So this is much more of a realistic take on AI. He's not thinking in such abstract terms as Musk or even Amodei. I'm not sure if you saw the interview, Keith. Do you share
Thompson's fear when it comes to content? He's still worried about IP. And in fact, The Atlantic is suing Google, claiming, at least, that Gemini is stealing Atlantic content.
Look, I did listen to the interview, firstly. He's good, isn't he? He's a really smart guy. He's a certain type, isn't he? He had a difficult... No, I do think he's smart, but I think he's also driven by a lot of personal experiences that make him combative, let's say, which is not unusual in life. And I think the position he's in makes him... It's unlikely you can trust his point of view, because he's so self-interested in the outcome.
I'd say equally for different reasons. They both have different self-interests. His is a bit like France. He wants to preserve the old ways because he benefits from them. And there's a lot of news this week about France doing things that do that. So he's like the older parent resisting the future. Musk is like the young kid urging it on. They both have their weaknesses and their strengths. But I don't think he's a flag bearer of the future at all. I think he's a flag bearer of the past.
I think it's the gerontocracy. I think the three labels on the title are all about AI. He's like sitting outside, which is why he really has no influence. He's kind of irrelevant in a way to what happens. But... He's like my dad when I started to like the Beatles, you know, screamed at me and told me they should wash their hair.
You're suggesting that he's a reactionary? Are you suggesting that anyone who cares about original content and monetizing content is by definition a reactionary?
I don't think he's reactionary. I wouldn't use that word. But I do think... Your dad, anti-Beatles, anti-dirty-hair. Well, that's conservative, as in preserving the present. He wants to preserve the present. And the present isn't going to be preserved. So it's fine that he has that opinion. He's not going to get it. He's going to be King Canute against the tide.
Well, finally, or actually not, I've got two more short pieces. I didn't put this, or you didn't put it, into the newsletter, but there was an interesting interview I also did with John Thornhill of the FT, who argues that we shouldn't write Europe off. I'm not sure if you listened to this, Keith. I did, actually. Europe isn't quite in the predicament that some people believe. He's very empirical; he's the opposite of Amodei or Diamandis or Elon Musk. He's very much focused on what is actually happening. Do you think that John Thornhill has a point?
He does have a point. I mean, look, everything is relative. You and I spend time in Europe. Europe is not a terrible place; it's actually a wonderful place, mostly. It certainly isn't the growth engine of the world, and probably can't do anything to turn that around, or at least won't do anything to turn that around. Although I was struck this week by the head of the EU saying that Europe should create a Europe-wide corporate entity called EU Inc., so you can start a company in the EU and it has the same set of regulations in every country, and the countries' specific regulations don't impact it, which is the first time you get this idea of a Europe-wide legal system.
They were. The thing is, it probably isn't going to happen, because there are more than 25 countries that have to agree to it. So Europe is challenged. He's not wrong to point to some of the good things about Europe. I mean, look, there are some very good venture capitalists in Europe, LocalGlobe, Seedcamp, Cherry Ventures, Speedinvest, and many others, that are every bit as good as their American counterparts. There's talent in Europe. That's certainly true. There's a growing number of cities that are hubs for innovation, including Berlin, Stockholm, Copenhagen, London has always been up there, Paris, and so on. Even Paris, your favorite people? Paris is actually probably number two now.
Yeah, and actually John was pushing that. What about this idea that state-related tech, what he calls biological computing, synthetic biology, materials science, lends itself to the European model, which is much more bound up with research universities and perhaps state investment or state control, and maybe in a way can connect with Amodei's concerns in his essay?
I would characterize that, the way I think about it at least, which you can disagree with, is I think of that as top-down innovation. It starts in universities. There's pockets of money in Oxford and Cambridge and Bristol, Newcastle, Edinburgh, that will fund some of those efforts as they get spun out. The ownership of the companies is largely between the universities and these pockets of money, not the entrepreneurs. The entrepreneurs play second fiddle. And then government money often gets involved, but then it dries up. DeepMind is the example to think about. And the companies then get acquired by larger American...
Although you're acknowledging, Keith, that he may have a point. Finally, your post of the week. We just talked about Google and OpenAI. You believe, or the post of the week believes, that OpenAI could take down Google's $260 billion ad empire. Here's how. Is that realistic or just wishful thinking?
I think a lot depends on Google. I don't think OpenAI holds the cards here. They have decided that the 90% of people who don't pay them are going to get ads, and the ads are going to be...
You're not thrilled with that, are you? You've always been very bullish and optimistic on OpenAI, but I'm beginning to sense that you're certainly not keen on, so to speak, this advertising model for OpenAI.
I think whether I like it or hate it is going to depend on the definition of ads in their context. I'm completely in favor of links where they get paid if I click on it. But I don't want those links to be ads as in inappropriate to what I'm trying to achieve. So I think there is an angle there that could work for them. And if it works, it could be valuable to them. Google, I think, holds the cards for several reasons. One, Google has the database of paid links. It knows every paid link and it knows how much it pays, second by second. It's a real-time paid links database, if you will. Google has the traffic to deliver, although it did say this week it has no plans to put ads in Gemini. So that's kind of interesting. But then it comes down to, does it plan to put Gemini in front of users, or is it going to keep putting search in front of users? If it chooses search, I think Google loses. It needs to choose Gemini, and then it needs to figure out how to monetize it. And in the short term, that could shrink its revenue.
Much to think about, many subjects that we will come back to over the year. We've been talking about the adolescence of technology; we're certainly in the adolescence of 2026. We will talk next week, Keith, in February 2026. I hope you continue to grow up. I will try. And we will talk again next week. Thank you so much. Bye, everyone.