Oct 7, 2023 · 2023 #33
Speaker
I've got sunshine on a cloudy day When it's cold outside, I've got the month of
Speaker
October the 6th, 2023, another Friday, another That Was The Week with my old friend Keith Teare, the tech sage of Palo Alto, another terrible AI-generated image. I don't know why you keep on doing it, Keith. And another week where AI dominates. You're saying that the story of the week and the story of the year, perhaps of the decade, is AI. What happened this week, Keith, to make it such a big AI week? Well, I call it "AI or Bust" because there are really two competing narratives this week. One is that venture capital is still heavily on a downward trend at all stages. And the other is that AI companies have been funded at a rapid rate at high valuations. And what happened this week is a lot of different things happened. The first thing, probably most significant, is that OpenAI disclosed a new version of ChatGPT. It's still GPT-4, but it's now what's called multimodal, which means that it can deal with images. For example, you can sketch the drawing of a website, upload... Did you make this terrible image from it? I did not. I used Midjourney for that, which is, you know... The people watching, I think, Keith... No one will be confusing this with Vermeer or Rembrandt. When you see that sun in the distance, Andrew, that is the AI, and the bust is in the foreground. So it's like "head for the promised land." When I think of busts, I think of other kinds, but that's another story. So back to the big story. The art's no good, but the tech is impressive. Yeah. So ChatGPT has basically stepped it up yet again. I said last week, there are lots of rumors that they already have what they consider to be an artificial general intelligence version of their software. They're rapidly iterating. They're including voice now. You can speak to it on the iPhone and it will speak back to you instead of typing. You can upload images from your iPhone camera that it can interpret. And, for example, it can write code.
If you wanted a Keen On website, you do a little sketch of all the sections, and it will write the code to create that website for you with what's called CSS, which is the design language of the web. And so it's just becoming... Fast. That's the astonishing thing. It's happening super fast. And it means it's going to be an assistant for almost anything. My son, Luke, was doing homework on waves of immigration into the US this week, focusing on the different experiences of Italian, Irish, Asian and Central American immigrants over different periods of time, and asking lots of questions to compare the experiences and difficulty levels of all those different things. Is it worrying your oldest son? He just graduated from Syracuse. He has a software job in the valley. You know, he's not using it yet, because most companies have not yet understood how powerful it is. It's almost like cheating if you're an engineer to use it. But it won't make him redundant, though? I mean, that's the big question: is it going to empower guys like your son who are just starting out as software engineers, or is it going to make them redundant? I think that relates to the extent to which they embrace it. If they embrace it, it's going to massively upgrade their capabilities and make them available for much harder jobs. And you've got to be careful there, because they have to learn through mistakes like we all do. So you don't want to offload everything. But what I found as I use it for SQL programming is that I learn from it as I give it problems, and one has to be super specific about what the problem is. If you're not specific, you can't get the answer. So that requires knowledge and intelligence even to ask the questions. But if you ask the right questions and it helps you, you learn from how it helps, and it just accelerates your education in any domain very, very quickly. So you say in the newsletter that you've been using it at SignalRank for programming, for coding.
You reduced your code by 90 percent while increasing how good the results are. Does it make coders kind of redundant? No, it writes code. But my code, because I describe myself as a B-plus programmer in SQL, my code tends to be very linear and lengthy. You know, first do A, then do B, then do C, then do D. And in my case, I'm looking at annualized cohorts of venture-funded companies. So I have to do the code for each year. What it could do is create loops, so that everything was in a single set of code that was shorter than my code and covered all the years and all the variations in a single piece of code. So it shrunk the code and made the code more resilient, while still, you know, having me setting the goals and the work. So it's truly amazing, honestly. It's a big deal. It's a big historical deal. I know, Keith, with your Marxist background, you like to historicize all this. You're intrigued by the idea of the significance of AI versus the mobile revolution. And you cite in this week's newsletter a piece by Rex Woodbury on the mobile revolution versus the AI revolution. Is it as big, according to Rex Woodbury, as the mobile revolution, or equivalent at least? It is, because he makes the correct point that the mobile revolution is really the end of the Internet revolution. At least. So when does that begin? 2012, 2013? Well, really, 2007. And you could even go further back to the BlackBerry, to the Palm, if you wanted to. So it's been a long time. I always am suspicious when somebody calls the end of a phase, because I still think there's probably... So is he presenting mobile in historical terms as how we think of Web 2, that they're the same thing? Web 2, yes, but also the mobile distribution of Web 2. Before mobile, we had maybe a billion PC or Mac users. Now we have over 4 billion smartphone users. So just the sheer scale of the distribution means the economics of being successful are huge and fast.
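The before-and-after Keith describes, one hand-written query per annual cohort versus a single loop or grouped query covering all years, can be sketched as follows. This is an illustrative toy only, not SignalRank's actual code; the table and column names (`cohorts`, `founded_year`, `raised_usd`) are invented for the sketch.

```python
import sqlite3

# Toy stand-in for an annualized-cohort table; all names here are invented.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE cohorts (company TEXT, founded_year INTEGER, raised_usd REAL)"
)
conn.executemany(
    "INSERT INTO cohorts VALUES (?, ?, ?)",
    [("A", 2019, 5.0), ("B", 2019, 3.0), ("C", 2020, 8.0), ("D", 2021, 2.0)],
)

# The "linear" style: one query per year, in real life copied out by hand
# for every cohort year, first 2019, then 2020, then 2021...
per_year = {}
for year in (2019, 2020, 2021):
    (total,) = conn.execute(
        "SELECT SUM(raised_usd) FROM cohorts WHERE founded_year = ?", (year,)
    ).fetchone()
    per_year[year] = total

# The shorter, more resilient style: a single grouped query covering all
# years at once, with no per-year duplication to keep in sync.
grouped = dict(
    conn.execute(
        "SELECT founded_year, SUM(raised_usd) FROM cohorts GROUP BY founded_year"
    )
)

assert per_year == grouped  # same answer, a fraction of the code
```

The grouped version also automatically picks up new years as they appear in the data, which is the resilience Keith is pointing at.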
The other thing with mobile is that distributing software became as simple as putting it in an app store. So you could directly address 4 billion people just by shipping something. That was never true before. So the scale of that... You said in the introduction it's beginning to impact valuations. I want to talk about the Anthropic valuation that you talk about. But I always used to say that to you. I said, oh, isn't AI changing everything? And you were so gloomy about valuations broadly, and investment. What's changed on that front? Why are you slightly more optimistic now? Well, I was never really gloomy about AI. But you always used to say, well, it's not that important a piece. Is it becoming larger, more important, more central in the investment world? I don't remember myself saying that, but I believe you. Maybe I did. But it's becoming more central just because it's where innovation is happening in a profound way that will impact human beings. You talked about Marxism. There's an interesting discussion within social science about the extent to which technology is the catalyst for human progress. And if you think about it, almost all human progress requires tools that make it possible to do things faster and better than before. But the tools don't do it all by themselves. Simon Johnson, we talked about him. He co-authored Power and Progress. He talks about how sometimes it can also compound inequality. But in that sense, technology is human. It's created by humans, used by humans, to improve the lives of humans. The economic layer, where the rich get richer and the poor get poorer, I think that's a separate conversation, which is an important conversation, but it's not the same conversation. So I'm a massive believer that there are technical catalysts to human change. And those technical catalysts are themselves human. They're not... They didn't come from... Yeah, I mean, it depends how you... I mean, it's a semantic issue. We talked about Anthropic.
To what extent are these remarkable valuations of wannabe AI players like Anthropic... I mean, they are, I guess, real players, but not as much as OpenAI. How is that affecting the broader market?
Speaker
Massively. I mean, if you think about the economic world of tech, there are fundings, there are mergers and acquisitions, and there are IPOs. And then there are product initiatives from companies at all stages. And the pace of change across all of those is massive. Google invested in Anthropic, and Amazon invested in them as well, as a protection against becoming irrelevant. And OpenAI is so good that it does threaten the irrelevance of other players. It really could own a very large part of the pie that AI will create, probably not the enterprise part. I think there's a lot of room for vertical applications in the enterprise, but it will play a role in the enterprise. What about Google? Google did a few things this week. Firstly, it launched Bard within Google. It had a big conference where the new Google phone, the Pixel, came out, which also includes Google Assistant with AI within it, Bard. And they also announced the integration of AI into Google Workspace, so in spreadsheets and word documents and email and stuff. So Google is deploying stuff. I suspect, just from a quality point of view, a bit like Bing, it isn't as good as OpenAI. So you tend not to want to use it, because you know there's something better. So why would you use something not as good? Are there going to be losers there? Can you be second in this market? Can you be an Anthropic and still win? Yet to be proven. I don't know. Hard to imagine, given the winner-take-all history of tech, and it's becoming an increasingly winner-take-all economy. I think it's going to be closer to the portal era, when you had Yahoo, Google, Excite, Infoseek. I think there will be different players. Yeah, but all those, the Infoseeks and the Yahoos, got destroyed eventually by Google, because it came along with better technology. Eventually, that's true. That's exactly what happened. And that could well happen. OpenAI is kind of embedded with Microsoft, which makes...
In and out of bed. Is there any potential for a massive blow-up on that front? Well, I think it's not a relationship that has shared incentives. So I'd be surprised if that ended well. But I think in this case, OpenAI is strong enough to come out of that okay, which is not normal with Microsoft. But if OpenAI dominates the AI economy, maybe they'll buy Microsoft rather than the other way around. So that would require lots of ifs and buts to happen. That's science fiction. You mentioned technology making the world a better place. You and I have talked about that a lot. You have an interesting piece from Azeem Azhar. We both know him quite well, the Golders Green-based technologist from North London. He's been on my show several times. I know you know him. How does he believe that AI can fight inequality? One of the great questions. We talked about your son. If your son loses his job to AI, God knows what he's going to do. It'll only compound inequality. Yeah. So he was interviewing the founder of one of the big AI companies. I'm just blanking on his name. Sorry.
Speaker
So Stability AI, which is Emad Mostaque's company, is another player, especially in the visual space, and is open source. What Azeem is arguing is that open source, if you roll the clock forward five years or so, is going to be responsible for lots of AI projects that are not driven by OpenAI or any of the major players. This is a fairly popular theme. We've heard it before. We heard it in Web 2. We heard it with Web 1, and it never worked out before. So why should we hope it will work out this time? It did work out with operating systems. Most of the internet runs on Linux. Which is open source. But that didn't create any more equality. Well, in a way it did, because a 12-year-old kid that's semi-competent can spin up an entire web infrastructure in their bedroom for nothing, because Linux is free. All they need to have is a computer. And so it does really democratize. It really does. Yes and no. But then that 12-year-old kid, the Mark Zuckerberg, eventually comes along and creates a walled-garden, winner-take-all company. Well, that's why Stability AI is interesting, because it's an open source company. By the way, Facebook is trying to play that role as well. It open-sourced some of its Llama code this week again. And in the Facebook case, one doubts the sincerity. In Stability AI's case, I think it is actually sincere. And there is a fighting chance that the models will shrink down to be able to run on websites and mobile phones, and the open source cause will dominate that world. I'm not holding my breath, I have to admit. I mean, OpenAI is anything but open. That's correct, which is why you need open source code, so that other people who don't want to pay OpenAI have a shot. But if OpenAI is the leader, what's the value of open source code? If OpenAI has the most valuable, the best code? Well, developers rely on open source code to do things even better still. For example, OpenAI itself has a lot of open source code inside it.
And it would have been really hard for OpenAI to exist without that open source code, models, for example. So I think it's important to understand the role of open source for developers. But it's the old story. We had the same stories about, say, when the first iPhone came out: it was all built on public knowledge, public research, public IP, and yet Apple never gave anything back. So why should anything be different? OpenAI isn't going to give anything back to the open source community. Apple actually does, Andrew. They all do. This is a bit geeky, but deep inside all Apple code is a Unix distribution called BSD, the Berkeley Software Distribution. And the version Apple uses is called Darwin. And so whenever Apple upgrades the operating system, they contribute everything that is backend into the Darwin code set, so that other developers can build different things on it. So it would be possible to download Darwin and build your own macOS. Yeah, I understand. But with the new iPhone 15, for example... I mean, Apple is what, a $3 trillion company? None of those sales go back into the public space. Yeah, but I think the mistake, or the
Speaker
kind of element that you're glossing over there, is that open source isn't a competitor to big companies. Open source is just a methodology for building code. And they kind of coexist. Even the big companies use a lot of open source. And if you and me wanted to start coding something large and complex, we would definitely start with some open source software stack. And that would save us years and years and years of development, 10 years maybe. So it's like that old phrase: we sit on the shoulders of giants. The open source stack of code is the legacy of collaborative developers over many, many decades, putting stuff out there for free use. Well, speaking of Apple and the iPhone, I know you, Keith, always buy the iPhone. You say I don't because I'm Jewish, which may be true. There's a very good review of the iPhone 15 Pro Max camera. You've sent it to me, and I'm thinking of buying it in spite of my religion. Is it a major upgrade? You're both an owner and you read this stuff. Is this a big deal, the camera on the iPhone 15 Pro Max? Yeah, it's only that model that is the breakthrough. So that's the one to get if I buy one, the Pro Max. If you want a better camera, that's the only one to buy. If you have anything from, I'd say, a 12 to a 14, and you don't care about the good camera, there's probably not a strong reason to upgrade. But the camera... I took it to Mexico with me last week, and to all kinds of events, and used it for both video and images. And it's just fantastic. It works in low light and creates very good images. The sound is fantastic as well.
Speaker
It's a studio in your pocket, basically. And you can't knock it. The review that I publish a snippet of is a massive review. Even my snippet is quite big. And it's written by the founder of Halide. And Halide is one of the best camera apps on the iPhone. And he goes through every detail of why it's awesome. And I recommend anyone thinking of buying one to just read that review. And you're probably going to write a check for close to $1,500 at the end, if you can afford it. Well, this was supposed to be the week of AI or bust software, but we mentioned hardware. I mentioned the iPhone 15. Another interesting piece was on Ray-Ban's new smart glasses that will be able to translate text. Are we on the brink of a hardware AI revolution too, Keith? Well, the Ray-Ban glasses, and one's always skeptical of Facebook for all kinds of reasons, but the Ray-Ban glasses seem to be closer to a real-world application than anything else they're doing. The Quest 3 was also announced at their event this week. And the Quest 3 has flipped from being an immersive VR headset to be closer to Apple's idea of a look-through headset, where the room you're in becomes part of the experience you see. And you're no longer separated from the room you're in. So I think the trend is now set: virtual reality is not the path, but augmented reality is. And the Ray-Ban glasses are a lightweight $300 augmented reality experience. You can get prescription lenses. You can get lenses that darken in the sunlight. You see an overlay on the real world that's mainly text bubbles for metadata. So one of the uses is standing in front of a historical monument and seeing the history of the monument come up in your eyes. And it also has a camera, so you can record events and they get saved to the cloud. What would Ray-Ban, if you put the glasses on, what would they tell you if you're watching Manchester United? They'd tell you to look away, I think, right now, wouldn't they?
You're still showing the graphic, by the way.
Speaker
Well, we're moving on. Another hardware breakthrough, startup of the week: EV boat startup Arc. TechCrunch, brilliantly original headline, "wades into water sports with $70 million in fresh funding", a lot of water metaphor there. So why is this the startup of the week? What are they trying to do, Arc? So before I answer, I have to tell you we're still seeing the iPhone 15 on the main screen. I'm not. I think you have a problem with your computer. Oh, well, I'm pleased about that. I hope the recording shows yours and not mine. So Arc is an electric vehicle company, but it doesn't do cars. It does boats, speedboats. They cost a lot of money, $300,000. Yeah, very much of a Silicon Valley play here, I'm guessing. Yeah. And even then, partial, because I'm not sure tech bros all like speedboats, or have anywhere to... Well, a lot of them do. Where are they going to take them in the valley? That's the... I mean, you can't go on reservoirs, and the ocean doesn't seem appropriate. So you're going to find somewhere, maybe Tahoe. But anyway... They take them to Miami with them when they leave, when they're old. So yeah. So I made it startup of the week because, firstly, it raised a lot of money. And hardware companies like that, with a lot of capital expenditure and research and development, are hard to fund at any time. They're even harder to fund right now. But this company's leadership managed to raise a significant amount of money to take it to the next level. So it probably means the product is going to survive and prosper. And well done to them. It sounds, to extend the liquid metaphors, a bit frothy to me. This is the kind of funding you'd expect during a boom rather than a bust. Yeah, yeah. I was pleased for them and, you know, not a little surprised that they pulled it off. It must mean that they've got enough sales to justify the effort. Are you going to buy one of their boats, Keith?
I'm afraid I'm not. I'm not water-friendly. My swimming capability is about equal to a 10-year-old's that just learned to swim. Well, you grew up in Scarborough by the sea. Did you not go in the sea there? It's filthy. Well, that's no excuse. You're filthy, aren't you? Only in the good sense of the word. Well, X of the Week, we're no longer using the T word. Interesting. And there's a nice coherence to the show and the newsletter this week. It's all about AI. And the X of the Week is from somebody called Wes Gurnee. Do language models have an internal world model, a sense of time at multiple spatio-temporal scales? In other words, do language models make sense of the world? Your X of the Week triggered a debate with the great Gary Marcus, who doesn't think so. So tell us about the X of the Week and what the issues are, Keith. So Wes Gurnee, if one clicks through on that, you're going to see a paper, an in-depth paper, that shows that large language models can have an awareness of the geography of the earth, and of time. And what does that mean? Well, it means they can have a sense of the physical world. From just the language? I don't understand. It's not language. It's physical. It's really about the geography and terrain, and putting data in time and place, in the right place, knowing that there's a correlation between a place, a time and some data. The data is kind of alive. Yeah. So it basically has to have a view of the world like we have in our heads. Like we know right now it's nighttime in Japan. And we know that Japan is this island off the coast of China, a set of islands. Yeah, but we only know that. We don't know that intrinsically. A six-month-old child doesn't know that. We only know it because we learned it. Yeah. And he's making the point that large language models are learning it too. So they're going to be much better at mapping content to reality than they have ever been. Is this back to the old debate about whether AI can, quote unquote, think for itself?
Because I know Gary Marcus, who you also cite here, is a skeptic on that. He's one of the world's leading authorities on AI. You did a debate with him recently. Yeah. So he reacted to this paper in the negative. He showed lots and lots of examples that large language models don't understand space and time yet. But he did what he always does, which is he creates this image of the perfect AI, and then he criticizes the current status as not being that. So in a sense, he's always right, right? Gary Marcus is always right. But in a sense, he's always wrong. He's always wrong because he's not embracing the positive and figuring out how to make it better. He's just pointing the finger that it isn't perfect. So it gets a little irritating. And I don't think he realizes how irritating it is. He really needs to either develop something on a different path or make suggestions to make this path better. I suspect he's inclined to the first, not the second. But being a constant finger-pointer, I don't think it really does him any favors, even though he's right. Because... I think he's starting to be understood now not just as a skeptic of AI, but as an educator. It's one thing to trash AI. It's another to be critical while being an AI expert, and being articulate, and quite a character. So it might be a hard thing for him to relinquish. Yeah. He also makes the point that there's nothing very new about this. I still don't really understand. Well, it isn't about... I don't understand. Maybe it's me, maybe it's you. What exactly is he saying? So let's read one quote, and I think you'll get it: "Gurnee and Tegmark seem impressed with the results. The fact that geography can be weakly but imperfectly inferred from language corpora is actually already well known." So basically, large language models read language. And so if it reads Shakespeare, it can probably tell you about Shakespeare, but it wouldn't correlate Shakespeare to London.
And so what he's saying is that large language models can correlate their learning to continents, and even specifically to cities. And so content can be associated with places, not just with the content. But how? I mean, unless that language involved... I mean, Shakespeare, for example. Shakespeare's London could have been a fiction. Well, there are lots of signals. If you think about everything that's ever been written about Shakespeare, there are enough signals to be quite specific about the historical context of Shakespeare. So I think it's all about reading the signals with a view to creating a real-time world map that starts at year zero and comes up to date, and being able to understand how to correlate data to both time and space. That's really what it is. I still don't understand. Finally, Keith, and maybe it's me being stupid: what would Shakespeare say about the iPhone 15 Pro Max? Would he say buy it or not buy it? I think he'd say, begad, get one.
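The claim the two of them are circling, that a model's internals linearly encode place and time, is tested in work like Gurnee and Tegmark's by linear probing: fitting a simple linear map from internal activations to real-world coordinates. A minimal sketch of the idea, using synthetic activations rather than a real model's hidden states (all numbers here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for LLM activations: 200 "place name" vectors in 32
# dimensions that linearly encode latitude/longitude plus a little noise.
# (A real probe would use hidden states extracted from a language model.)
n, d = 200, 32
coords = rng.uniform(-90, 90, size=(n, 2))          # true (lat, lon) per place
W_true = rng.normal(size=(2, d))                    # hidden encoding of geography
acts = coords @ W_true + 0.01 * rng.normal(size=(n, d))

# Linear probe: least-squares map from activations back to coordinates,
# fit on a training split and evaluated on held-out places.
train, test = slice(0, 150), slice(150, n)
W_probe, *_ = np.linalg.lstsq(acts[train], coords[train], rcond=None)
pred = acts[test] @ W_probe

# If geography really is linearly encoded, held-out error is small.
err = np.abs(pred - coords[test]).mean()
print(f"mean absolute error in degrees: {err:.3f}")
```

The probe succeeding on held-out places is what supports the "internal world model" reading; Marcus's counter is that such linear recoverability is weak, imperfect, and already known.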
Speaker
I guess you'd say, what can make me feel this way? It's my girl, my girl, my girl, my girl. I've got you.