Jun 22, 2024 · 2024 #22
Accelerating to 2027?
Speaker 3
Hey everybody, it is Friday, June the 21st, 2024. It's a Friday, and this is That Was The Week, our roundup of tech news in Silicon Valley with Keith Teare, based down in Palo Alto. And in Palo Alto, it seems, Keith, people are partying as if it's 2027 and AGI has arrived and...
The bots have taken over the world. The title of your That Was The Week newsletter this week is "Accelerating to 2027." What's the big deal about 2027? Is this Silicon Valley's version of 1984?
Well, 2027, it seems, has become the default year by which the pontificators about AI are claiming that we will have superintelligence, which is a new phrase. And interestingly, the author of the main essay this week talks about 2027. So does pretty much everyone who responds to him. And superintelligence is even in the name of the new company that Ilya, whose second name I'm not going to try and pronounce.
Yeah, I can pronounce it. We'll put it up. Ilya Sutskever, who was OpenAI's former chief scientist and, I think, one of its co-founders, launched a new company. So what's this new company called?
And this essay that seems to have sparked a certain furor in Silicon Valley is by a character called Leopold Aschenbrenner. We're looking at his X page. He looks like a bot to me. He looks about 15 years old. It looks like a character out of... Death in Venice or something by Thomas Mann. How do we know this guy actually exists? He's from something called situationalawareness.ai.
I think the only reason we know he exists is because smart people who are real have responded to his essay on X. So quite likely he exists and he's responding back. He does look 15. He looks awfully young.
Well, he reminds me of Harry Stebbings when Harry first started. I was one of the first interviewees on 20VC, and I remember Harry looked about that age at that time. He still looks pretty young.
So what's situationalawareness.ai? What does this term even mean? I know the essay this week that sparked your editorial is about something called situational awareness.
He's basically rocket science. He begins this essay, situational awareness, the decade ahead. You can see the future first in San Francisco. That's the kind of thing that would be written by a bot. It's hardly original. It's been written about a million times before.
Yeah, so situational awareness is an elite group of people who really know what's going on, which he counts himself as being among, in quotes. And he proceeds at great length, I will tell you this, it's a 165-page essay. And young though he is, it's extremely well written, well presented. The graphs illustrating it are well made. And he has a very clear case. His case is that the cost of artificial intelligence is coming down. The ability per dollar spent is going up. And that in his view, by this time next year, AI bots will be able to talk to each other, to improve each other through reinforcement learning, and that by 2027 we will be at a time when there will be literally hundreds of millions, if not billions, of independent bots pursuing various tasks without the need for humans to get involved.
Rather like maybe one of them would be this Leopold Aschenbrenner kind of guy. Is this Ray Kurzweil's singularity? I don't know what year he wrote about, but he also predicted that this would arrive. I think it was either in the late 20s or 30s.
Well, the singularity is, I don't think it's really dated. It's a directionally correct prediction without a date. But Kurzweil basically says that innovation will accelerate to the point where the human brain can't catch up with it, which, I will say, even though it still sounds a little bit science fiction, is less science fiction than ever before as you get to grips with what these people are doing. It doesn't seem impossible that machines all by themselves, working together and against each other so they teach each other, will be able to leverage their training to come up with ideas, with insights, let's say, that humans wouldn't be able to come up with.
Keith, in a previous life, you were a student of German historicism. And Germans, of course, love to think about history in terms of its inevitability. Is this what Leopold Aschenbrenner is arguing in situational awareness that technology now, Moore's law and the falling cost of AI is inevitably leading us to AGI?
Well, he calls it superintelligence, which is even beyond AGI. So the definition of AGI is it's indistinguishable from a human. The definition of superintelligence is it does things no human could do.
That's quite a dramatic prediction from a young man like Leopold Aschenbrenner. How seriously should we take it? I mean, Gary Marcus, of course, our old friend, has jumped in here, one of the great skeptics of AI. What does he say? And you have a link to his piece in this.
Gary makes fun of Leopold in the following way. The first is he accuses Leopold of only being able to draw graphs that extrapolate using a straight line from something current to the future, and misses out what the limits might be to that graph continuing. So in Marcus's world, LLMs' intelligence is capped. He may not even concede they have intelligence, but insofar as he does, it's limited by the fact that the core technology is employing statistically driven word finding and therefore has no knowledge of what it's saying or how realistic what it's saying is.
So we have no proof this guy exists. It certainly doesn't look very real. And you're saying the only reason we should take this seriously is because what you call serious people have taken it seriously, but they could all be deluded too or tricked,
couldn't they? No, that's not my argument. My argument is different. And I'm not going to join either camp, but I think there are merits in both camps. Not to become Leon Trotsky here, always trying to find the middle ground. But basically, Leopold is... Trotsky wasn't a middle-ground kind of fellow, was he, Keith? Yeah, he was in the context of the history of the Bolshevik Party, but we won't get into that.
So basically, Leopold is right that LLMs... already do things that logically they shouldn't be able to do given the limits of the technological stack. The fact that an LLM can write code and the code can run doesn't follow from the fact that it's merely a statistical word choosing engine. So it is quite remarkable that at scale, and this is something Ilya says in various videos this week, these LLMs are surprisingly good at lots of things. And surprisingly is the key word. You wouldn't logically predict they could do that, but they can. So, therefore, let's not rule out that as they scale and become cheaper, they'll be able to do even more things that would be surprising.
I mean, this is all, to me, very abstract speculation. It's just science fiction. But does it tell us, in a week where... And you have one of the news of the week pieces that NVIDIA now has ascended to the most valuable company in the world, maybe has echoes of the dot-com boom, that you, and not maybe you, but your fellow VCs and other tech people in Silicon Valley, they want this stuff. There's a hysteria down there. So when something slightly ludicrous like Aschenbrenner's piece comes out, they all embrace it or react against it.
Yeah, but you can't want something into existence that isn't real. It either can or can't happen. That's a pretty binary thing. So even though you may be right in describing the hopes of Silicon Valley, that's not going to make these things happen. Ilya and OpenAI and... Anthropic, if anyone makes it happen, it's going to be those guys. And they are all doing things that are pretty stunning to us lay people in this world. Look at Anthropic. This week, it announced Claude 3.5. They always announce Claude in stages. This is what's known as stage one. It's massively better than the one they launched six months ago at 10% of the cost.
Well, it's way beyond Moore's law in terms of how fast the cost is shrinking and how fast the scale is growing and how fast the competence is improving. I mean, that's what Gary Marcus never acknowledges. And I wish he was more balanced, because he does make some valid points about the architecture. But it was not predictable, even six months ago, that six months later we would be where we are. Things are moving so fast that a six-month prediction is out of date quickly.
Well, we all know the Harold Wilson quote, another fellow Yorkshireman, Keith, about a week being a long time in politics. What about three years in tech? What should we conclude? Does it suggest that we simply have no idea? I mean, whenever technologists talk about the five- or 10-year window, I always think that they're basically throwing up their hands and saying, I have no idea what the future is. I'm just speculating.
Even the best thinkers in this space have underestimated how fast things will change. So, yes, they themselves are surprised that what they're building can do what it does, because that wouldn't have been predictable based on all of the theses you had at the beginning of building it.
There's still no evidence to me, as not really a technologist, not a Silicon Valley person like you, that this is actually having any impact on ordinary consumers like myself. Whatever the Anthropic improvements are, however much better Claude 3.5 is, and then we'll have Claude 4.0 and blah, blah, blah, what difference does it make? I'm not using this stuff. I still don't see products, mainstream products, that are changing the world like Jobs did with the iPhone.
So we have a comment, by the way, that Leopold used to work for Sam Bankman-Fried previously. Is that a joke? No, I think it might be true. And then he was fired from OpenAI, and he's now running a hedge fund.
I think he's a bot. But, you know, back to the real question. Look, if superintelligence happens, the paper he wrote, which we can take with a pinch of salt, but I don't think you can take the ideas with a pinch of salt, claims that national security itself will be at risk because governments will have access to superintelligence that can do things humans can't do. And he posits a competition between China and the USA.
Maybe he's an invention of the Pentagon or of Peter Thiel or something like that. Let's go back to the real world. I'm not convinced by it. I mean, it's interesting and amusing, but I'm not sure how relevant it is. There's real news this week. The Surgeon General came out with a warning, suggesting that social media should come with warning labels. Mike Masnick, one of your fellow libertarians, Keith, suggests that he's wrong. What's your take on the Surgeon General's remarks on social media?
Are you saying that any doctor or specialist researcher can't determine the impact of anything on the minds, the mental health, of a child because they're not a child?
Well, the observer's biases influence what they see, as we know. Just like you accused Silicon Valley of seeing what it wants to see. Well, that's actually true of everyone, based on their skills and their knowledge. So Masnick throws into the field in this essay a bunch of evidence to the contrary, which seems very compelling to me, which is the benefits of social media and the argument that many of the ailments that are perceived as being caused by social media actually have other causes to do with...
Yeah, I'm not a libertarian, and I don't always agree with Masnick, but I think he's right on this one. It's not going to be like a packet of cigarettes anyway, and kids don't pay any attention to any warnings on the screen. So it does seem, again, if not a storm in a teacup, then a certain kind of hysteria that seems to affect everything about America these days. And of course, it comes out of The Anxious Generation, a bestselling book suggesting that social media... There's clearly something going on, though, with the mental health of young people and the general cultural atmosphere. But to blame it all on social media, I think, is problematic.
One of SignalRank's investments is in a company that provides therapists to schools to work with teenagers. And that company is booming, because schools are liable and therefore want therapists to avoid bad outcomes. Very prevalent in my area here in Palo Alto. So I think the society drives fear and paranoia and legal liability, which drives narratives about what bad could happen as opposed to what good is happening. So we don't really see optimism in the narrative. And when we hear optimists, we accuse them of being zealots or deranged, because obviously optimism is crazy.
I think, well, you're just reversing the fair point. I'm not sure anyone who has any optimistic thought is considered mad. Are you suggesting that's the current case?
Well, the more optimistic you are, the more in doubt is your sanity. Like this guy, Leopold: he's both optimistic and pessimistic. He's optimistic about technology and pessimistic about nation states and likely war. But most people are focused on his optimism. Actually, I think I'm the only person who has mentioned his pessimism at all.
Yeah, I'm not sure he's more optimistic or pessimistic. He's just deterministic. I mean, he seems to think, as Gary Marcus said, that the straight path is inevitable. Yeah. What else? So I think you and I are probably in agreement on the need to control social media. And I'm guessing that Aschenbrenner would probably want to ban TikTok, probably sees that as a danger. There's a lot of interesting news this week, in a more positive way, about video. You have a TechCrunch piece about DeepMind generating AI soundtracks and dialogue for AI videos. And then there's an interesting piece from Nieman Lab about whether the news industry is ready for another pivot to video. Is it video's moment again, Keith?
It's like the Brazilian economy, always about to happen.
Speaker 1
And, you know, you look at the numbers around YouTube and TikTok and Instagram moving to video, and Facebook having Reels now. And this show. Clearly video is easy to make, easy to distribute, free, pretty much free. That'll be helped by this week's Startup of the Week, which we can get to later.
But yeah, no, video is on fire. The DeepMind stuff is closer to OpenAI Sora, as in it's really about the creative arts and producing content using AI for the creative arts. And DeepMind's a serious... very focused on specific tasks, kind of an AI company as opposed to...
Yeah, but DeepMind is ahead on the things it focuses on. You can't really compare the video of the week this week. There's a debate about the future of generative AI, and the woman in that debate is very thoughtful about explaining the difference between generative AI and, oh, it's not the video of the week, but it is a video.
Well, I mean, if DeepMind have their way, we'll have AI that will be able to produce this without you and I actually having to be in the room. We can just program it.
So we might even be able to create Leopold Aschenbrenner. Maybe we should create him as a guest. Somebody should make him. I think someone has, Keith. Before we get to Product of the Week, which is part of our optimism on the future of video, there is one rather sad note, Keith. You were one of the early adopters of the Vision Pro, and Apple has suspended work on it. Is this Apple's acknowledgement that this product is a bit of a failure?
It's only a real product in a solo consuming world. That is to say, it's so isolating that it's only good for things that you do on your own. Watching videos is a clear example of that. Reading is a decent example.
They're both talking about things that are real, but Aschenbrenner is extrapolating. If you did an Aschenbrenner on the Vision Pro, you'd basically be wearing a pair of glasses like mine, and the whole world would be available as a 3D mesh in front of your eyes as well.
You'd be surrounded by Leopold Aschenbrenner. It'd be like reading a Thomas Mann novel. More positively, you've given Startup of the Week, I'm not sure, have you given it to Apple or to somebody else or to the iPad? It's Final Cut Pro, which, of course, is the Apple software for making movies.
So if you have three iPhones in the house and an iPad... you could create a three-camera broadcast from three iPhones and suddenly go from single-camera video like we're doing on both ends to multi-camera video on both ends. So they've more or less made a TV studio free.
You will be able to still use them because, for example, I'm using a Sony camera with a great lens. The lens is about a meter away from me, but it can zoom. If I was using my iPhone, I could roughly create the same effect. It wouldn't be as good. So the cameras still are a notch above, but super expensive. I mean, my camera here costs as much as an Apple Vision Pro. So, you know, an iPhone I've probably already got. So there's no additional cost. I do think people are going to start using iPhones as cameras for live broadcasts now that this is out. And the other thing is it lets you create a recording of each camera to create multi-camera post-show edits to then go on and use Final Cut to edit.
From the point of view of tools for creators, it's a big deal. Obviously, creators are a small subset of the whole of the population, so most people won't even notice it. But if you're a creator doing especially live video that involves more than a single camera, we could have a couple of people now sat in my office as well as me, and they could be available to you on screen to switch to with an iPhone pointed at each of them.
Is the news industry ready for another pivot to video?
Speaker 1
Well, it's also challenging that it may not be up to the job. It's asking, as more and more people get their news from video, where does that leave the New York Times and the Washington Post, which are still largely the written word?
You know, what's going to happen there? And what competence does the New York Times have to evolve into a video producing platform as opposed to a writing platform for its employees? So I think there's some big issues there.
CNN, interestingly enough, they are more and more using YouTube and TikTok, doing short pieces pushed into those environments for free. And I do think they have to do that, if nothing more but for marketing purposes and their brand. But they don't engage an audience because they don't really know how to use video to engage an audience because ultimately news is not very engaging unless something big is going down. And then people switch on their cable.
It really, again, is another reason why you should be bullish on YouTube, because that's the perfect platform for all this, isn't it? Yeah, yeah. Well, that's the stuff. So we're giving it to Final Cut Pro, not to Apple, although Apple is the maker of Final Cut Pro. You don't even use Final Cut Pro. What software do you use?
I recently have been using DaVinci Resolve from Blackmagic, which I like a lot better. But we've got a comment: Thomas Mann. Should that be Thomas Mann with two Ns? I think it should. And the Death in Venice plot: we should be careful not to fall in love with Leopold's predictions.
We are falling in love. That was a good point. We are falling. I think we've fallen head over heels in love with Leopold Aschenbrenner. And that brings us to the X of the week. And, of course, who is the author of the X of the week? It is none other than Leopold Aschenbrenner. I think, Keith, you've already fallen in love with this guy.
Yeah, I mean, what he said in this post was, virtually nobody is pricing in what's coming in AI. I wrote an essay series on the AGI strategic picture, from the trend lines in deep learning and counting the OOMs to the international situation. And "The Project" sounds a bit science fictional, a bit peculiar. And then he talks about what The Project is, and then "the free world must prevail." This guy's a bit insane, isn't he?
We think he's young. I mean, he has the photo of a young man. We don't know if he really exists. Could be an old guy. Could be Steve Gillmor or Gary Marcus or something.
But, you know, clearly nation states dominate the globe, and they are all going to be rushing to try to use artificial intelligence in the military context. There's no doubt that's true. Well, they're already using it. They're already using it. They're going to use it more, and it's going to get more powerful. With robotics, they even get to arm AI physically. So, you know, it's not wrong to discuss that. But all it does for me is it amplifies the need for the human race to tell their governments that war is not an outcome we want. And please don't use this to, in quotes, win. Because if you try and win, we're all going to lose.
So you blame the governments for this? It seems like this guy's insane. I mean, to talk about something called... the free world must prevail. Superintelligence will give a decisive economic and military advantage. China isn't at all out of the game yet. What kind of nonsense is that? I mean, China wouldn't be out of the game. China is probably ahead of the United States, which is not particularly surprising or worrying. Why should we worry about them?
He also makes the point that open source fuels foreign rivals because it's open. And he also makes the point that the security around the key part of a model, its weights, is so poor that every set of weights on the planet will be stealable by superintelligent bots. So none of that is really wrong. It may be alarmist, but there is kind of an element of truth about it. So one has to at least, if one was in government, especially in security, you would probably have to read this and figure out what is valid and what isn't. And some of it will be valid.
But it's just technology. I mean, it's the same as when industrial states invented industrial weapons, either in terms of the production or their output. I mean, there's nothing particularly dramatic about this, is there?
We may never see him again because this may be a one-off. On the other hand, he may double down and become important. So I'd say watch this space. He's a bright kid. He's young, therefore prone to exaggeration, as all young optimists should be.
Well, I'm not going to say "two World Wars and one World Cup," because England's lost many, not World Wars, but World Cups to the Germans. So I think we'll end there, Keith, because otherwise I'm going to get banned. Have a great week, and we'll see you again next week. Bye.