Sep 26, 2024 · 2024 #33
Can Europe Produce World Class AI Innovation?
Speaker 2
Hello, everybody. It is Saturday, March the 7th, 2026. What a difference a week makes, as Harold Wilson once so famously said, especially when it comes to technology. Last week on That Was The Week, our weekly roundup of tech news, Keith Teare was asking whether Anthropic was wrong in terms of its pushback against the government. He argued it was, but we were arguing it very much in a vacuum, out of the context of the American invasion of Iran. Keith believed that Anthropic was wrong, or perhaps is wrong. I wasn't so sure. I saw Anthropic's pushback in the context of the unusual political situation in the United States. A week later, Keith seems to have changed his mind. This week, his editorial on That Was The Week is entitled No Good Guys. He's used AI to put Sam Altman, Dario Amodei, and Pete Hegseth in the same room. I'm not sure the three of them would or could ever be in the same room. They all look rather miserable, Keith. So there are no good guys. Does that mean you've changed your mind, that you may, Keith Teare, once in your life have been wrong? What do you think? It was a leading question. I want you to admit you were wrong, not me. I always tell you you're wrong.
The first third of my editorial is reaffirming that I believe I was right, and I still think that Anthropic was wrong. But they weren't the only wrong actors. So, in a different way, was OpenAI, and in yet a different way again was the US government. And the collective failure amounts to a complete absence of leadership over AI, and I want to credit Om Malik with putting that thought in my head. He had an excellent piece, which is this week's post of the week.
And this piece, we'll come to it at the end, is the great AI game versus AI theater. Of course, America is the stage of all sorts of theaters, not just AI theater, but AI's piece of it.
So, again... Let's step back a bit. No good guys. So Amodei isn't a leader. Sam Altman isn't a leader. Pete Hegseth certainly isn't a leader. You've left out the guy who claims to be leading the US, and people wonder whether he's capable of it. But the point, Keith, of your argument last week about Anthropic being wrong is that AI companies shouldn't be leaders, that they're just...
providers that just... No, they shouldn't be, they shouldn't set policy. Let's be specific. Right. So they shouldn't be leaders. No, you can be a leader. You can certainly be a leader in your opinion of what policy should be, but you don't get to set it.
Right, so leadership. Now here's the thing: they don't seem to have an opinion about the big question of AI policy and the future. They have lots of opinions about contracts that they're negotiating right now, as does the government. But none of the three of them are standing back and asking the big questions about AI. It's almost as if the US is on remote control when it comes to AI policy. And who's the real culprit? It's probably David Sacks, the czar of AI, who is absolutely invisible.
I don't even have a slide for David Sacks this week. He's such an invisible czar. So let's step back, Keith. I said that a week was a long time. Wilson famously said a week was a long time in politics, and it's certainly a long time when it comes to international politics and war. Since we've last talked, this time last week, there's been this huge war in Iran and a joint U.S.-Israeli invasion, or an attempt to invade, or at least to bomb Iran back into the Stone Age. How has that changed the debate? What's happened in the last seven days in terms of AI policy and the relations between that and the current war in the Middle East?
Yeah, yeah, good question. A few things changed. The first is that when we spoke last week, Altman had gone on the record supporting Amodei's instincts around surveillance and autonomous weapons. Since then, he signed a contract with the Department of War, which clearly had been in negotiation as well.
Isn't that, Keith, classic Sam? Or at least for those of us who aren't great fans of Sam Altman, it's part of his classic playbook of saying one thing and then doing something quite different.
Well, the letter of the actual event is that he agreed to the U.S.'s "no unlawful use" language, which on the face of it is fairly reasonable: we'll only use this within the law. Now, the second thing that happened is that Amodei released an internal memo to his team, which got leaked in The Information, and which made clear that Amodei's belief is that "no unlawful" is the same thing as saying there is no law. Well,
one of the essays this week, which I thought was really good, Stratechery by Ben Thompson, one of your favorite sets of essays, he makes the important argument that...
There's mostly no such thing as law when it comes to a lot of this stuff in an international context. So when we're talking about unlawful, that itself is a meaningless word. I mean, the very US-Israeli invasion of Iran, or this attempt to bomb the country back to the Stone Age and assassinate all its leaders, that's by definition against all forms of international law. So is there any value in even using this word unlawful in this discussion?
Well, I think that we shouldn't get into the wordsmithing of it. Well, you're the one who's talking about unlawful. No, but what is it? No, I'm not. I'm describing what was signed. If you ask me my opinion, I'll give it to you. But so far, you asked me what has happened. And I'm saying two things happened. Then a third thing that happened is the US did actually use Claude in the Iran operation, despite the conflict with Anthropic.
You and I, before we went live, Keith, were talking about how I've got the $100-a-month Claude and you've got the $200 one. I'm assuming the US has its $200 seat.
Well, yeah, but I think we've got to use it as a prism to properly analyze the challenges faced by AI in the US, because it is a magnifying glass into the present moment that reveals what the role of each of the actors is. And that has big implications for the near future and what happens next. By the way, Google and Microsoft have been silent on the edges, but it did materialize yesterday that the Department of War has a contract for use of OpenAI, with Microsoft. Yeah, I
don't want to give away any marital secrets, but I can guarantee you one thing: Google is not silent behind its doors on this stuff. I mean, in terms of your three guys, these no good guys, Dario, Sam, and Hegseth: you put this fake photo, or fake video, together; they didn't really exist in the same room. But should we even be distinguishing between Sam and Dario? I mean, they're different kinds of characters, slightly different companies, but basically there's not that much fundamental difference between OpenAI and Anthropic. Of course, when you compare them with Hegseth and the U.S. government, you do have a fundamental division.
You've got three self-interested players. And as you said, in the absence of clear law, the government gets to decide what to do more than any company would
get to decide. Well, but wouldn't Dario say that's not true? In the absence of any clear law, especially given this complete avoidance or rejection of international law by the current administration, and this is what you and I talked about last week, it actually makes the role of a Dario or a Sam much more important, because they're players here. When the government does stuff that is in complete denial of the existence of law, doesn't it give some degree of, if not legal, moral authority to private companies?
No, what they become is naughty boys in the playground, leveraging the gaps to their own self-advantage. And they're both doing that. Amodei seems to be ideologically driven in that regard. Altman seems to be mainly commercially driven in that regard. But they're both naughty boys in the playground, leveraging the absence of clarity to their own advantage. Neither one of them is an authoritative leader of opinion with the interests of everyone at heart. Neither one. And by the way, obviously, Hegseth is not that either.
Does that ever exist? I mean, you've read enough political philosophy, Keith. Every political philosopher from Socrates to Rousseau to Marx has claimed that authority. But one wonders whether there's ever been a government that can claim to speak for all people.
There are in history, repeatedly, opinion leaders with moral good and societal good on their side. You can think of thousands of examples, but let's just pick the civil rights movement in America and the role of JFK. Also in the Cold War, with the Cuban missile crisis, there are leaders who think ahead, big picture, and execute against it.
But in the world of AI today... Well, let me just call you on that, because that's wrong. I mean, as you know, I'm currently in the process of trying to write a book about Bobby Kennedy, and I've done a lot of reading about the Cuban missile crisis. He claims in some ways, probably correctly, that he and his brother saved the world from a terrible war with the Soviet Union over the Soviet establishment of missiles on Cuba. But the truth is much more complicated. They never really told the truth about what happened in the back channels and the deal they did with the Soviets. So this idea that there was a time when government acted for the people and was trusted, and that now that's no longer true, is also wrong.
Again, I would disagree. I'm not a fan of Hegseth or Trump. But, I mean, would it be fair to say, and this is where your photo is actually pretty accurate, with Sam sitting at the table with his hand on his head and Amodei looking just as miserable, that they're being put in an impossible situation because of the behavior of the US government?
Yeah. So, as you probably know, I'm a great believer in the right of nations to self-determination, and I do consider that what the US did in Iran, even though I'm not going to cry any tears over the results, was absolutely outside of any international rules.
Well, results, I mean, there are lots of results. I mean, it's one thing, the results of the assassination of their leadership, another of what's happening now. But that's another issue.
Yeah, there's a lot one could talk about there. But that said, in the world we live in, the nation state, especially a powerful one like America, does get to set rules, even when it's breaking them. And companies don't. And so Hegseth probably is the least culpable of the three, because he's just doing his job, and he does run the Department of War, as it's now called.
They've all done different things wrong, including Hegseth. Okay, let's leave Hegseth out. What, then, and I include Sam in this, have Sam and Dario done wrong?
I don't know if you highlighted it, but if you look at the quote from Dario's memo that leaked, he has clearly stated an ideological preference. He has quoted the "no unlawful use" phrase, meaning that he's well aware that the government was intending to act, quote, within the law. And he then interpreted that, wrongly, as meaning there is no law. It's written in his own words: he said that in the absence of law, which of course isn't true, there are laws, we're going to decide policy. He says it in his own words.
And we talked about this last week. This is all in the context of the current Trump administration, which many, including myself, and certainly Dario, think are behaving not only illegally, but immorally.
Yeah, we do have room. You seem to be suggesting that he's somehow Machiavellian here, that he knew what he was doing. I mean, doesn't he have a moral position?
Are you saying, Keith, and this is surprising, and you've done a lot of reading around this, are you saying that Dario is purely Machiavellian, that he doesn't care at all, that he's purely using this,
Obviously, I can't read his mind, so the true answer is I don't know, but here's what I do know. He has an ideological disposition, which, by the way, I would broadly agree with. You probably would too. He was in the middle of a negotiation where that disposition dominated his thinking about what the right thing to do was in the contract. It wasn't a commercial set of decisions. It was an ideological set of decisions. And again, you or I may have tried to do the same thing.
It got exposed because the government refused to agree. And his internal statement this week was after the fact. It wasn't a mea culpa. It was a "here's why I did it, and I would do it all over again." And it exposes that he's ideologically driven, which is fine, as long as you don't try to set state policy.
Well, but isn't the whole foundation of Anthropic... I don't even like this word ideological, because I don't know what that word means, and there's a sort of pejorative sense here that if we're ideological, we're doing something wrong. Wasn't Anthropic created as a response to a lack, or a sense of the lack, of morality in OpenAI? I mean, Dario was with OpenAI, and he split.
Yeah, I think that's accurate. You know, most of my friends are on his side of that split, morally and intellectually speaking. I probably would be too. But in the cold light of economic reality, OpenAI is winning by far. But it is being caught up, to give Anthropic credit where it's due, and Gemini for that matter.
And one of the ironies of this, and I think this is purely unintended, is that this very public dispute has clearly benefited Anthropic. I mean, at the beginning of the week, regular users of Anthropic, including myself, couldn't even use it, because it went down under the weight of so many people using it.
Yeah, the week's been interesting, because I think as every day went by, Amodei looked worse and worse, especially with the leak of his statement. That was kind of a killer. And Altman is pretty much the same as he was a week ago. Nothing's changed there. And Hegseth, I think your opinion of Hegseth will correlate directly with your opinion of the Iran conflict. And it's interesting. I watched Bill Maher's show last night, which, for those who are not American, is a comedy show called Real Time with Bill Maher. And Maher is a Democrat. And everyone on the show, including Democrats, had to acknowledge they liked what they're doing in Iran. So there's really not much of an outcry about what's happening in Iran, which is astounding, given that they're... Yeah,
I agree. And I mean, I consider myself on the left. I'm certainly outraged. But let's move on a little bit, because we could spend the whole show on this, and this is not a politics show; this is a tech show. One of the pieces that you cite this week is by the New York Times columnist Ross Douthat: If AI Is a Weapon, Who Should Control It? That seems to be the core issue here. Firstly, who is in charge of AI when it comes to its use by the government in war? And who should be?
Well, I think that the answer is the same as who's in charge of battleships. But sadly, with battleships, there's a plan and a process. With AI, it's so new, there isn't. And so the use of AI is pragmatic. It's day-to-day. It's based on situational complexities that we don't know about. And the government, you know, probably we want this to be true, tries to make a rational decision in every single moment what its role is.
Yeah, but you compare it to battleships. There isn't a single company out there... I'm not an expert on the battleship economy or battleship economics, but there's no equivalent to OpenAI or Anthropic when it comes to battleships. Isn't this why this is a different issue? Because one of the things that came out of this was that when Amodei, supposedly, you say that it wasn't entirely true, when Amodei pushed back against the government, he was doing it because he had a degree of power, because the government needed Anthropic's technology. I mean, if, I don't know, Raytheon said to the government, we don't agree with what you're doing with our battleships, we're not going to give you our technology, the government would say fine, and they'd go and find another vendor. So...
SignalRank is an investor in a company called Saronic that produces autonomous battleships. Well, ships in general, but military. And, you know, it's quite clear that if you're producing a military ship, it's going to do military operations.
Well, it is AI, but it's AI embodied in a ship. And when it gets a contract to deliver a ship, it knows, because that's the whole purpose of the ship, that it's going to be autonomous. So autonomous doesn't mean no human in the loop. You know, like drones are autonomous. And there's clearly AI in drones to do with navigational and other characteristics. But there's a human in the loop. Eventually, I think we can all agree that we're going to get to the point where there isn't a human in the loop. Probably. It seems very likely. So that question that you put on the screen.
Yeah. And the answer's got to be the same as the answer to who controls anything in a democracy. And it isn't the vendor. I mean, who shouldn't control it? The vendor. Who should control it? The authoritative user. In a democracy, that's the government. And even in a dictatorship, that's the government. So the Chinese government makes decisions. The Russian government makes decisions. The American and British governments make decisions. The French government makes decisions. And no one would ever believe that that decision should sit anywhere else. Yeah,
but one of the other pieces you linked to this week is a very interesting conversation between Yascha Mounk and Danielle Allen, both of them prominent political thinkers, on this kind of crisis of traditional top-down liberal politics, and an increasing focus of people like Mounk and Allen, and many others as well, I've had them on my show, on what we call participatory democracy. So in terms of this AI debate, it's more than just, oh, well, Hegseth should run things, Trump should run things, whoever's in government should run things. Something is changing, both on the left and the right: this idea of an old-fashioned technocratic liberalism is now being challenged by participatory liberalism. And I'm sure that the participatory liberals, whether it's Danielle Allen or the many others who are writing on citizens' assemblies and much else, are all beginning to wonder whether, in this new age, participatory democracy should have a role in controlling AI, if it is indeed, which it is, a weapon.
I think that is the right way to think, because, unfortunately, democracy is representative democracy, and your ability to control is delayed: in the US, by every four years, or every two years if you count the midterms. So the ability of participant electors to control outcomes and policy is there, but it's time-delayed. And because it's representative, it's subject to capture by lobbyists and others. So it isn't a perfect participatory democracy by any means, but at least compared to a dictatorship, the people do have a way to change policy. And clearly governments do change. So we kind of do have that, but it's imperfect. I certainly feel like with AI, the playing board changes weekly, and the challenges of what it is you're controlling, and in what context, change weekly.
Right, and it's that changing weekly which makes the idea of participatory democracy, really what technologists might call real-time democracy, not just intriguing but essential. If everything is changing weekly, or sometimes daily, and a week is a long time in any week of technology, Keith, you and I know this from doing this show for several years, then we've got to rethink the nature of government. That's not for us on That Was The Week; we're a tech show. But I think that's why including the Mounk-Allen conversation is useful. I've dealt with it a lot; as I said, a couple of weeks ago I had the Yale political thinker Hélène Landemore on the show, and she's another leading thinker here. So in the meantime, and I'm quoting the end of your editorial, "the question of who sets AI policy deserves a serious answer." That goes without saying. "This week proved that nobody currently in the room is capable of providing one." It wasn't Sam or Dario or Hegseth. So what should be done in the short term, for next week, for example, when these issues haven't gone away and perhaps in some ways have become even more salient?
Well, I think, in the spirit of writing an open letter, I would write an open letter to David Sacks, to whom Trump has given AI as one of his domains. Sacks has done a very good job in crypto of setting rules which have gone through Congress and have become, or are becoming, law. That is very different to what the case was before. He hasn't done that with AI. With AI, in some ways rightly, so plaudits to him in some ways, he took a hands-off approach, which was mainly focused on regulation as a bad idea. But there's a difference, and liberals need to understand this as well, between regulating something and setting policy for something. Regulating is generally about how to stop bad things, or good things, from happening, depending on your point of view. Policy is about how to allow good things to happen.
I don't know if everyone's going to be happy with David Sacks; he's not my god, though he might be yours, Keith. Let's move on. This is a subject we will no doubt come back to, probably next week. A couple of other interesting essays you have from heavyweight thinkers. You've got the Krugman essay. Krugman has now gone over to Substack; he no longer writes for the New York Times. The economics of technological change. What is Krugman saying here that's different from anything else anyone's saying on the economics of technological change, particularly AI?
The relationship between technology and jobs, the relationship between technology and wages, and the relationship between technology and the tendency to monopolies and oligopolies: all three of these seem to me to be crucial talking points at this moment in history. So that's why I put it in. And there probably are people who will go and pay.
There is no plan for what happens to people when AI replaces jobs. So, number one, he's right: technology and jobs, and what happens after, is key. I have opinions about that. Technology and wages: typically, historically, wages have gone up over time as the working hour has shrunk, and that is one of the ways of capturing productivity and progress. In the equation of capital and labor, if AI removes labor as a requirement, then the question of wages doesn't arise, but living still does. So there's a whole set of discussions around how you live after wages. And then the third one...
Yeah, and this is your... I'm not going to get sucked into this one this week. This is your Muskian utopia of post-money, which I'm very sceptical of. You also include... Wait, wait, wait.
Andrew, just one thing, because the third one's important too. Monopolisation and oligopolisation. In other words, big things get bigger. How can that be transitional to a post-labor society? That's another conversation. In other words, maybe getting big is a gateway to changing society, where capitalism results in a post-capitalist reality. Not a revolution, not communism, but capitalism itself is so successful that it creates the conditions for a post-capitalist society.
That sounds to me like sort of Hegelian or Marxist sophistry, but maybe we'll come back to that. I'm sure we will. Another heavy hitter you've linked to, and this one, I think, you have access to the entire piece. Another wealthy tech guy, Tim O'Reilly, who's always very wise on these sorts of things: How We Bet Against the Bitter Lesson, on skills and the future knowledge economy. Is Tim in sync with Krugman? Politically, they're in pretty much the same camp.
He basically builds on this thing called the bitter lesson, which comes from an essay by Richard Sutton and says that methods leveraging computation have always beaten approaches that try to capture human knowledge; chess engines beating the best chess champions is an example. So he's in this kind of awkward place: yes, computation is going to become more and more capable of being better than humans. But what does that mean for humans?
Well, that's the trillion-dollar question, which, again, we will come back to. Very briefly: I included as my interview of the week one with Tom Wells, a journalist who's written a book called The Kissinger Tapes about the behavior of Henry Kissinger in Vietnam. He had access to all of Kissinger's phone records. And we've talked about this on the show this week, or on my show, Keen on America. In some ways things have changed; in some ways they haven't. I mean, Hegseth and Trump are behaving very much like Kissinger and Nixon in Iran. So if you want to remind yourself that a lot of these issues, at least, aren't new, watch my interview with Tom Wells, and also my interview on the Pentagon Papers with Michael Ellsberg, a son of Daniel Ellsberg, who published the Pentagon Papers. Then your startup of the week, Keith. Sorry, I interrupted you.
Isn't Kissinger, if you look at your five key takeaways on that interview and you forget who you're talking about, it sounds like Sam Altman to me. He lied more than expected.
Right, and I'm no fan of either Kissinger or Altman, so aren't you coming into my camp on this one then? Well, that's a judgment issue. Everything's a judgment issue, Keith.
You always hide behind this when you don't like the outcome. No, because I think you can admire the ability to get things done that include lying, callousness, dodgy morality, being two-faced, and being banal or evil. The fact that you move things along and get things done maybe trumps those five points.
Well, I'm not gonna get sucked into Trump himself, but I mean, the whole point of this conversation with Wells, as well as the stuff on Ellsberg, is that Kissinger and Nixon's indifference to human suffering ultimately cost not just them, of course, but particularly the country, not to mention their victims. So I certainly don't think this in any way legitimizes Sam Altman. Moving along, your startup of the week is an interesting one. I'm less interested in the company than in the implications. JobWrite, according to one piece this week, did $5 million in ARR, whatever that means, with nine people. Are we increasingly, and I think this touches on some of the bigger themes here, Keith, coming to a point where these large companies are going to get run, maybe not by nine people, but by five, or one?
Well, a company, in the abstract sense, is basically capital, labor, revenue, and profit. And if you can have an equation where most of your value is on the capital side, and labor is largely a human in the loop to an automation engine, which is what SignalRank is, then revenue and profit can get really big. So that's the definition of a company.
In other words, and this comes back to all of this, I don't want to get into the economics or even the technology of it, but when it comes to morality: we're going to have these huge companies of the future, massively valuable, run by a tiny group of people. I mean, whatever one says about Dario and Sam, they have thousands of people working for their companies. But the Anthropics and the OpenAIs of the future, Keith, are they going to be run by tiny groups of people, maybe even by single individuals?
Well, that's very provocative. Finally, your post of the week is by your old friend Om Malik, one of tech's wisest men, which seems to bring everything together in terms of our conversation this week. The great AI game versus AI theater. What is Om saying that is a good conclusion, not just to your newsletter of this week, Keith, but to our show?
Well, his starting point is that the Chinese Communist Party produced a five-year plan for AI in China that is published and readable in English. And he contrasts that with the absence of any plan in the United States. Yeah, no good guys,
And then the second thing is the actual strategy. The Chinese strategy is founded on open source software being used in pretty much every element of the Chinese economy. as a bottoms-up kind of organic and viral process that empowers individuals and small businesses to leverage AI, which is, you know, kind of the opposite of how you normally think of China.
Fairer, more moral than the U.S.? Well, he isn't saying China is democratic, but he's saying that approach to technology is inherently more inclusive and bottoms-up. So he is saying China's better than the US.
He likes that strategy, yes. So is that the answer, Keith? Ultimately, when it comes down to all of it, that when you're looking for no good guys, is the good guy in the room going to be Xi or some American equivalent of Xi who determines all this?
Well, it's starting to be practical to do what the Chinese are doing. Apple this week announced a new MacBook Pro that has got 128 gigabytes of memory. That is more than enough memory to run a very, very powerful AI model on your laptop. And OpenClaw can run on top of that. So you can have your own agent on your laptop running.
Well, I'm saying the preconditions for the Chinese model of working are being put into place by companies like Apple. And the latest Intel chips also are these system-on-a-chip with a lot of memory that can run big AI models. So we are getting towards a place where you could run AI for free locally, without any reliance on either.
Right, you can do it on a MacBook Neo. So for those aspiring Xis, or Pete Hegseths, go out and get the all-new Apple MacBook Neo; it only costs $599. Is that the only cost, Keith, of being the next Xi, $599?
There you have it. If you've got $5,000 lying around, you can become Xi and become a dictator. Keith, as always, pleasure. And we will talk again next week. Very tech-centric news week, I'm sure, is to come. So we'll talk next week. Thank you so much.