May 9, 2025 · 2025 #18
Who's Cheating?
Speaker 2
Everybody, another week of technology. And we're back with my old friend Keith Teare, the... publisher, to put it pompously, of That Was The Week, an excellent newsletter. Arthur C. Clarke, one of the great thinkers on technology, had three laws. And the third law is perhaps the most famous, the one that suggests that any sufficiently advanced technology is indistinguishable from magic. And one of the implications, I think, of that is that any sufficiently advanced technology, when used, if it is indeed indistinguishable from magic, appears as if we're cheating. Because magic is a form of cheating, and that's the subject of this week's newsletter on a very advanced technology, AI, perhaps the most advanced technology that humans have ever created. The title of Keith's editorial this week is Who's Cheating? This is related, Keith, to Arthur C. Clarke's famous law, isn't it?
I like the association. Yeah, it's not my... I didn't pick it. I wish I had. But yeah, this is actually about a Columbia University student who was expelled recently because he was deemed to have used AI on his projects for about 80% of the work, and that was deemed to have crossed a line, and he was expelled.
Yeah, James D. Walsh, well, it was a New York magazine piece, Everybody Is Cheating, and it's about a character called Chungin "Roy" Lee from Columbia University.
It kind of raises a question. It's almost as if, when the first car was invented, somebody riding in it would be accused of cheating because they're not using a horse and cart, and therefore they could get from A to B faster than the horse. Cheating defined as using new technology is really the crux of it. And of course, it does look like cheating from the point of view of the past. But from the point of view of the future, it probably looks like early adoption. And the fact that this kid was expelled really shines a light on, in this case, academia, although I think the same could be said for almost any profession,
where the idea of using these tools to do better is not okay.
Speaker 2
Yeah, and of course, you and I were talking before we went live about the increasingly obscene cost of college. I think sending a kid to a private university these days in the United States for four years is going to cost you almost $400,000. There is a connection between the crisis, an intimate connection between the crisis of the university and these new tools, which are increasingly making going to college, if not redundant, certainly forcing colleges to actually rethink what the whole purpose of going to college is.
Exactly. You couldn't have said this before, but I think universities are a little bit like music publishers or book publishers in that their business model is to capture the talent. So they hire you as a professor and they give you a guaranteed job. And then they lure students to pay an increasing amount of money to get access to the talent. Obviously, the bulk of that money goes to the university authorities. Some of it goes to the talent, and parents are left paying huge sums of money for this access. So it's a business model that can only be sustained if talent is in short supply. And with AI, talent's going to become both free and ubiquitous.
Well, the question, Keith, of course, that everyone's asking is, OK, we got AI. We know that it enables us, as you say, we're no longer walking. We're running. We're on a car, we're in a train. So what should college do in an age of AI? I mean, the internet's already made a lot of the traditional college redundant anyway. I mean, Google, you could essentially go to university just by using Google in terms of libraries and access to information.
With AI, maybe not quite AGI, but certainly sophisticated AI, what's missing? What can college do to help people prepare for the world and be more valuable, both to society and to themselves?
The way I think of it, the very best future students will be not only intelligent, but the best at using the available tools. And so there'll still be a distinction between performance levels, and that will not be something you could cheat on. As we all know, you can lazily use AI by giving it the question and copying and pasting its answer. That almost certainly will be a poorer answer than the one from somebody who understands how to use reasoning models, how to go back and forth with the AI and not accept its first answer, who will question facts that are questionable or wrong. That student's going to do a lot better. And because doing that requires knowing the content, you can't know AI is wrong unless you know the subject. The better students will use AI and still be better. And so what colleges should actually do is become the biggest advocates of AI, to take away from teachers the mundane elements of their job and to train the teachers, in this new world of new tools, to distinguish between good and bad outcomes. And I think that's going to happen anyway, right? It's inevitable. The only question is, is it already happening? And the evidence from Columbia is no; there's going to be a resistance period of trying to slow this down. And during that resistance period... you know, Sequoia Capital had an offsite this week, and one of their three areas of opportunity was education.
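To make the back-and-forth workflow Keith describes concrete, here is a minimal sketch using Anthropic's Python SDK purely for illustration; the model alias is an assumption and any chat-style API would do:

```python
# A minimal sketch of the "don't accept the first answer" workflow:
# get a draft, then push back with an informed follow-up question.
# Assumes the ANTHROPIC_API_KEY environment variable is set.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-7-sonnet-latest"  # illustrative model alias

history = [{"role": "user",
            "content": "Summarize the main causes of the 2008 financial crisis."}]
draft = client.messages.create(model=MODEL, max_tokens=800, messages=history)
history.append({"role": "assistant", "content": draft.content[0].text})

# The second turn is where knowing the content matters: challenge claims
# that look questionable instead of copy-pasting the first response.
history.append({"role": "user",
                "content": "Which of those claims do economists contest? "
                           "Give the strongest counterargument to each."})
revised = client.messages.create(model=MODEL, max_tokens=800, messages=history)
print(revised.content[0].text)
```

The point is the loop, not the specific API: the student who interrogates the model's first draft ends up somewhere better than the one who submits it verbatim.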
Yeah, I'm not sure if AI university is a contradiction in terms. I'm just not sure if there's any point in universities in an AI era. Or certainly in the broad concept of a university, of going away to school and paying hundreds of thousands of dollars to become an adult. It's increasingly unconvincing. And you have two kinds of hysteria. You have the reactionary hysteria amongst the educated elite about the decline of civilization. And then, of course, you have the hysteria amongst the bureaucracy and the academic elite about undermining their business model and way of life.
But Andrew, what do you think? I think that credentials and both peer judgment and the judgment of more experienced people with younger people, credentials and judgment won't go away. Universities probably should and will. But I don't think credentials and judgment... What is a credential without a university? What is a credential? Well, that's a great question, but I think there might be some good answers to it.
It's a business opportunity. So I would say in my case, it would be some things won't change. Understanding of the body of work for a discipline will still be important. Being able to take your own point of view on that body of work and its relevance to now will still be a skill. Being able to define strategies and tactics from a body of work related to a problem will still be a skill. So there'll still be a distinction between, just like when you hire someone, you try to discern how good they're going to be at the tasks that you're hiring them for. That doesn't go away.
It's a dying world, Keith. The world that you and I grew up with, of universities, of that kind of credentialism, is dying. It's being replaced, and it's not entirely clear what it's being replaced by, which I think explains the hysteria, the reaction. And I think it also explains a lot of the weird hysterical politics of our age too. The hatred of expertise on the one hand, and then the hatred, among the experts, of the people who are hating them. Meanwhile, on the web itself, something may be dying, which is the web, according to Casey Newton. He writes about something called the dying web, and that's not dyeing as in being changed to another color. Dying means that the web is going away, according to Casey Newton.
Yeah, I'm mourning the web. It's gone. It's finished. How many pieces, Keith, have we read or talked about on the dying web? I mean, what was the remark? Reports of my death have been greatly exaggerated. Certainly the web has the right to say that.
Look, the web defined by Casey really means the open internet with www addresses. It's the Google web. It's Web 2.0. Right. And this isn't really a lament for the web, actually. When you read it, it's a lament for the challenge of publishers getting traffic. He believes that the blue links in Google are the only way that a publisher can get traffic to its content. And he thinks that as Google's web search is replaced, both by Google itself and by others, by AI, those blue links are going away, and therefore publishers' traffic is going to shrink, and therefore traditional publications are going to be in more mortal danger than they already are. That's really the essence of what he's writing about. And I do think that is true and false at the same time. A little bit like the education debate: good content is still going to be in demand by educated consumers of content. Finding it is going to evolve from web search to all kinds of other things. Most of the AIs now have references and links. And the AIs are mainly subscription models. So it isn't, yet at least, an advertising medium, although OpenAI is talking about bringing advertising to AI. I think that'd be a terrible development. So I think we're moving from an advertising-based blue-links search web to an AI-based references-and-links subscription internet, where the body of knowledge is available, quite a lot of it for free, and some of it if you pay a subscription. And you will still discover things. And I also think that subscription via tools like RSS to feeds from the New York Times and other like publications will still be attractive to people who want to read those publications.
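As an illustration of the RSS point, here is a minimal sketch using the third-party feedparser library; the URL is the New York Times' public homepage feed, and any publication's feed works the same way:

```python
# Minimal sketch: subscribing to a publication's feed with feedparser
# (pip install feedparser). The URL is NYT's public homepage RSS feed.
import feedparser

FEED_URL = "https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml"

feed = feedparser.parse(FEED_URL)
print(feed.feed.title)

for entry in feed.entries[:5]:
    # Each entry carries a headline and a direct link; no search engine involved.
    print(f"- {entry.title}\n  {entry.link}")
```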
Well, that's what That Was The Week is. And that's your, I'm not sure it's a business model, but that's your modus operandi with That Was The Week. This week, Anthropic launched its Claude web search API. You've been talking about these sorts of initiatives for a while now. With these web search APIs from AI companies like Anthropic, does that make traditional search redundant?
I think it depends on the use case. I would still use Google if I'm looking for the phone number of a local restaurant. I wouldn't use Google if I'm looking for the recent thinking about voice-driven AI in the context of education.
But Keith, you're sufficiently living in the future. I hope you never call your local restaurant. In fact, I hope you never even use your phone. I don't.
Why would you ever call your local restaurant? Well, it's just an example. Google's useful for phone numbers because it's deterministic. Nobody uses the phone anymore. Think of it as deterministic versus probabilistic.
You're telling me to shut up. My substantive point is that deterministic outcomes, Google search is still pretty good for. Probabilistic stuff, AI is better at. And Google, by the way, if you think of cheating and slowing things down, as we talked about with universities: Google's introduction of AI at the top of search is its version of trying to slow down this change. But in doing so, it's already telling us it knows AI is better than search results for a lot of things. And therefore, it's kind of predicting its own future demise. And its challenge, the innovator's dilemma, is how aggressively to close down the past and embrace the future without killing itself.
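For the Claude web search API mentioned a few turns back, a minimal sketch of the probabilistic side might look like the following; the tool version string is the identifier Anthropic documented at launch and is an assumption to check against current docs:

```python
# Minimal sketch: a "probabilistic" research query answered by Claude
# with server-side web search enabled. Assumes ANTHROPIC_API_KEY is set;
# the tool version string below may change between releases.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-latest",  # any web-search-capable model
    max_tokens=1024,
    tools=[{
        "type": "web_search_20250305",  # assumed tool version identifier
        "name": "web_search",
        "max_uses": 3,  # cap the number of searches per request
    }],
    messages=[{
        "role": "user",
        "content": "What is the recent thinking about voice-driven AI in education?",
    }],
)

# The reply interleaves text blocks with citations pointing at source URLs.
for block in response.content:
    if block.type == "text":
        print(block.text)
```

A deterministic lookup, by contrast, stays a plain search query; the split Keith describes is between questions with one right answer and questions that need synthesis.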
Yeah. And of course, Microsoft went through that, and it seems to have survived. Google now is going through the same experience. It had a very bad week. Its shares slid as it became clear in one of these antitrust trials that Apple is seeking AI alternatives to Google search. So what's the connection with Apple here? It was a better week for Apple than it was for Alphabet.
You know, my kind of conspiracy theory brain kicks in on this one. I think Apple and Google have a common interest in fighting off regulatory attempts to break them up. And I think Apple here is coming to Google's rescue by helping make the case that Google doesn't have a monopoly and that the market is entirely capable of killing Google, so there's no need for the government to do so. So I think Eddy Cue is probably telling the truth.
And Eddy Cue was their former VP of, what, strategy?
Speaker 3
He still is. He still has the same job. So I think when he testified and said this, he wasn't lying. But I do think it was helpful to Google for him to say it.
Yeah, and it was the same VP of... legal strategy or regulatory affairs at Google who seems to have screwed that one up as well. Meanwhile, MG Siegler is writing about good vibrations between Apple and Anthropic. Is there a new alliance? I mean, Apple couldn't acquire Anthropic, but certainly those two A's, are they the new powers that be, in comparison with Google?
Yeah, you know, we actually said a few weeks ago that Apple should acquire Anthropic, I think. Or was it Perplexity? Or both of them? One or the other, actually. And this is evidence that we were smelling the coffee in the right way. This is a potential deal for Apple to build in Anthropic's AI (it has already built in OpenAI) in the specific context of Apple's tool called Xcode, which developers use to write applications. And Anthropic is the very best coding AI. And so it would represent quite a big upgrade for Xcode to have Anthropic built in. When I write Apple apps, I use Anthropic inside Cursor; it's not possible to use Xcode for that, but I can use Xcode to look at what Anthropic did. This would take things a step further, where I could use Xcode directly. So it's mainly a developer-facing story, but it's also an admission by Apple that automated code writing is the future.
Yeah, and another of the pieces you linked to this week is that, with Apple eyeing its move to AI search, they're ending the Google era. And we've been talking about the end of the Google era for a while, but it's clearly coming. It doesn't mean the end of Google, just as the end of the desktop era and the rise of the internet didn't mean the end of Microsoft, but Google's having to radically reinvent itself too. Everyone's moving so fast, Keith.
It is. I don't think there's ever been, that I can remember, something that changed so rapidly. Even the internet was kind of like the same thing for 10 years, just evolving. Even the move from web one to web two, and the slightly abortive move to web three, was a continuum, not dramatic. This is dramatic. We're moving rapidly to multiple AI agents talking to each other to accomplish tasks.
Yeah, and meanwhile, as we noted earlier, the universities and other institutions are standing still, falling further and further behind. Meanwhile, the big story that I think you perhaps underplayed, because you have a particular affection for OpenAI, is the story about, and this is how MG Siegler put it, OpenAI attempting to have their for-profit cake and eat it too. In other words, Sam Altman came out and said OpenAI wouldn't be shifting to becoming a for-profit company. It would still be a non-profit, but nonetheless, it would take that $30 billion investment that it got from SoftBank a few months ago. Do you see this as a big story? Has this in any way undermined your confidence in OpenAI? If this doesn't, then I don't know what will.
You know, I think, firstly, let's just explain what happened. What this represents is OpenAI conceding defeat in the attempt to wholly transform into a for-profit corporation, due to the complications of valuing various stakeholders' stakes in such a change and the legal challenge from Elon Musk.
Right, so a huge victory for Musk, huge loss for Altman. So what they've now done... And just sorry to jump in here, let's just remind ourselves that OpenAI was co-founded by Musk and Altman, and then Musk went off in his Muskian way, in a huff, and now has spent the last few years trying to undermine both OpenAI and Altman.
But anyways, so what they've now done, and the detail is super important here, is they've removed the cap on the for-profit side's value. So people like Microsoft who own shares in the for-profit side were capped. They're now no longer capped. So governance-wise, they've retained the authority of the not-for-profit, but not value-wise. They've fixed the value of the not-for-profit as a shareholder in the for-profit, so it's not going to grow any more percentage-wise, and it may not even grow in absolute terms. So the future growth is all inside the for-profit, and it's no longer capped. So actually, OpenAI has fairly cleverly navigated its problems to create the outcome it wanted, which is a for-profit winner inside of its existing structure.
Well, that's the best-case reading, Keith. For our interview of the week, I interviewed Keach Hagey, who is a Wall Street Journal writer, one of the best, an award-winning writer. She's also the author of The Optimist, the new biography of Sam Altman that's out in the next couple of weeks. Her reading, and I'm not saying she knows more than you, but she spends a lot more time on this, is much more in the M.G. Siegler camp: what's emerging at OpenAI is a profoundly unnatural company, a company which is trying to have it both ways, being both for-profit and non-profit. And in the long run, it's just not viable. Just to borrow a word that you used to like and perhaps still like, OpenAI is a unicorn in the real world. And as we know, there's actually no such thing as a unicorn.
Well, I think, look, I don't think they're trying to have their cake and eat it too. I think they're now unambiguously in favor of being for profit. There's not even a narrative any longer that their mission... What did he say this week?
It's a victory because they couldn't go the whole way. They had to go halfway, to accept the continued existence of the not-for-profit and its governance role, but they've capped its economic upside. So they're drawing a line starting today on where the value growth sits, and it's inside the for-profit, which they are unambiguously in favor of. So I don't think they're being hypocritical at all. I think they were legally challenged in accomplishing what they wanted, so they figured it out a different way. But they're not hypocrites.
Well, I'm not talking about hypocrisy. I'm talking about the creation of a company that just doesn't make any sense. At my startup, we had a guy, a typical sales guy, and he was always full of these cliches. And the one I always loved from him is: you can't be half pregnant, which is certainly true of sales. Either you have a sale or you don't. And it seems to me that what he's doing is trying to make OpenAI half pregnant.
Well, tell me if you think this is fair, Andrew, but I think Altman for the last two years has agreed with that and been determined to get rid of the not-for-profit. So he kind of agreed and said, let's get fully pregnant with profit. Why? Because the cost of building now is so huge, it needs large amounts of money, and money doesn't come unless the people supplying it can make a profit. So I think he's been on that side for a couple of years.
No one's arguing that. Look, it's obvious, and that's why he got pushed out in the first place, and this is what I talk to Keach Hagey about, and she's very, very good on this. I mean, it's obvious he wants a for-profit company. He's not committed to the nonprofit ideal. That's why he got pushed out in the first place. There's no doubt about what Sam Altman wants. The problem is that he can't get what he wants within OpenAI.
Actually, I think if you read the detail of this outcome, he's got exactly what he wants, with one compromise: the not-for-profit still has a governance role. So why is it a victory for Musk then? Why has it been interpreted as a victory for Musk? Well, because Altman couldn't get 100% of what he wanted. He got 80% of what he wanted.
So here's the Keach Hagey reading, which I think is pretty interesting. She thinks that this will make it harder and harder for OpenAI to raise money in the future because of this cloudiness, the complexity of this weirdly unnatural corporate structure that he's building. Maybe we should bring her on this show and we can discuss it. But anyway, let me finish. So her argument, and I think this makes more sense, it's kind of interesting, is that ultimately,
Altman's most important relationship is with Trump. And she thinks that eventually, if OpenAI... I'm going to cough again, excuse me. If OpenAI can't raise traditional venture money because investors are not comfortable with this weird structure, then it will result in the US somehow supporting OpenAI, because her argument, and this makes complete sense, is that a Trump administration, given the threat from China, cannot afford to have a failed OpenAI. So what we're sort of creeping into is a new age of national capitalism.
What does that mean? Sounds like some Yiddish thing.
Speaker 3
It's a fixed belief that governs all your other beliefs. And I think she's got that. Look, firstly, I think the outcome of this is that OpenAI will be more capable of raising capital than ever before, because they just uncapped the upside. So the idea that they won't be able to raise capital, I think, is wrong. Secondly, I think Altman has understood something that others are coming around to, which is that all the money in AI is going to be in the application layer. And at the application layer, that is to say the layer we all use, OpenAI is far ahead of anybody. It now has more than a billion active users at the application layer.
Yeah, well, there you go. And by the way, that's the CEO of Instacart, a public company that's doing super well, choosing a career path to run applications at OpenAI. And OpenAI historically has been thought of as an infrastructure company. I've always said on this show that the stack in between the application layer and the infrastructure is going away and that the winner will own the whole stack at the application level. And we've been saying that for a year here. And this hire says that OpenAI believes that as well. It's huge. It's a massive signal to the industry about what's happening. And Google should be threatened by that. Microsoft should be threatened by that. At the Sequoia offsite this week, they argued that software is going away: software defined as middleware; the AI is going to own software, not just infrastructure. So it's a massive, massive bet on trillions of dollars of value.
And this woman is called Fidji Simo. Actually, she looks like a ghost in the photo. I'm not sure; she clearly isn't one. We shall see. So the stakes are enormously high. And also, one of the other pieces of this, which is interesting, is it's cooling the relationship between OpenAI and Microsoft, and between Nadella and Altman. You're still very bullish on Altman. I have a feeling that he's a brilliant salesman, but this may be one sales pitch too far, that this will be a bridge that he will fail on. But we will see. Finally, when you say the future is in applications, what will that mean for OpenAI?
Well, think of an application as something serving a human goal. And human goals up until now have been mainly served by these form-filling web services.
You know, think of Figma, for example, for design, which made some announcements this week, or Adobe. It is very, very unlikely that custom software for specific use cases is going to be needed in the future. You're going to be able to do everything you want through a single interface that will branch off into functionality, depending on what your goals are. And the computer, if you will, like in Star Trek, can do everything. And obviously on Star Trek, that computer is envisaged as some kind of an AI, but it can do everything, even make food.
So you mean you can call it, as with a restaurant? Because you're in the business of calling; you're the last man using a telephone to call your local restaurant. You mean you can call OpenAI to get takeaway? You'll be able to call your robot to make it for you, if you want. But I have to admit, I'm not convinced. You and I have argued on this one now for months, if not years, about OpenAI. We will see who is right on this one.
Well, that's the question. I mean, you should read, you should watch my show with Keach Hagey, whose job it is at the Wall Street Journal to watch OpenAI. She's a very credible reporter. I don't think she's against OpenAI.
Well, you've got shibboleths, whatever that stupid word means, about, you know, Star Trek. And I asked you what the application layer is, and you said it's everything and it's nothing.
Well, the word application means applied to something for humans. That's what application means. It means you have a task. AI is gonna be able to do pretty much any task and it's happening quickly.
Leaving aside that, and that's a sort of an abstract notion, what becomes of the community within AI that's committed to nonprofit? There's still a significant number of people there who are opposed to Altman's vision of the company. What becomes of them?
I'm going to answer it. I'm telling you, that community is alive and kicking. They were all at the Meta conference this week; Meta has an open source Llama model that is quite good. Alibaba in China has the Qwen model, which is the equivalent, and which is also very good. And OpenAI, by the way, in testimony to the Senate this week, confirmed that it will be releasing what it believes is the best open source model later this summer. So free open source models are not going away. Just like in the computing era, Ubuntu and Linux didn't go away.
Yeah, but they're designed for these companies to promote the interests of the companies, which are for profit. I mean, Mark Zuckerberg has never given away anything in his life to anybody.
No, you're wrong, Andrew. They're released under an open license, with weights, so that anyone capable of doing it can take them and build their own stuff using them. They're totally open and no money flows. And so there is a huge not-for-profit part of this. Let's just be clear: OpenAI is not, does not want to be, and has declared itself to not be a not-for-profit. It's a for-profit, but it has some non-profit governance. So there's no ambiguity here. It's a capitalist enterprise, but there's a huge not-for-profit open source part of AI as well, meaning everyone in the world can build their own applications if they want.
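To make the open-weights point concrete, here is a minimal sketch of pulling a small open-weights model and running it locally with Hugging Face transformers; the repo id is one published Qwen variant, chosen only because it is small enough to run on a CPU:

```python
# Minimal sketch: download open weights and generate text locally.
# Requires: pip install torch transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen2.5-0.5B-Instruct"  # illustrative small open-weights model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "In one sentence, what is an open-weights model?"}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```

No money flows in that exchange: the weights come down under the model's open license, and everything after the download runs on your own machine.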
Well, we will see. This is a subject that's not going away. I think it's the first real unicorn, but in my own version of what a unicorn is or isn't. Lots to talk about, Keith. We will come back next week. Meanwhile, Spurs are playing Manchester United in the Europa League final. So maybe we can do a special edition of That Was the Week from Bilbao in a couple of weeks.
Wouldn't that be? You know, I'm scared to go because you've beaten us three times this season already. And, I mean, a fourth seems unlikely statistically.
Well, the other three were completely meaningless games. And I'm just as scared as you are. We both expect disaster. We will see. Maybe, if we lose, we can use the excuse that we're a non-profit.
Well, we're certainly not winners, that's for sure.
Speaker 2
One thing we can say about Daniel Levy at Spurs is he is all for profit. There's no nonprofit at Spurs. We will talk again next week, Keith. That was the week. Fascinating week. OpenAI, Google, Apple, more of this next week. Have a great week, everyone, and we'll see you in a week. Thanks, everyone.