Transcript Viewer
Apr 4, 2023 · 2023 #11. Read the transcript grouped by speaker, inspect word-level timecodes, and optionally turn subtitles on for direct video playback.
Human Transcript
Speaker
I've got sunshine on a cloudy day When it's cold outside, I've got the month of
Speaker
April the 1st, a new month, old subject, and an old show, and I've got sunshine on a cloudy
Speaker
day, and I've got sunshine on a cloudy day, and I've got sunshine on a cloudy day, and I've got sunshine on a cloudy day, and I've got sunshine on a cloudy day. The letter, I guess, your newsletter, was driven or inspired or triggered by an open letter signed by a number of people, prominent people, on pausing giant AI experiments. You beg to differ, is that fair? Yeah, not just me. I mean, I think that open letter got a lot of publicity. Gary Marcus, who was one of the signatories, I saw him on TV at least three or four times on Thursday and Friday. He's the gentleman I debated on IQ Squared. Yeah, and he's been on Keen On several times. He's one of the world's leading and perhaps most controversial authorities on AI. Yeah. So, I mean, what I did is wrote a counter open letter, acknowledging many of the concerns, by the way, about the quality of large language models and their ability to be accurate, but instead of calling for them to be paused or stopped, called for them to be improved through further experiments. Yeah, and you wrote, we encourage all AI labs to continue exploring and developing AI systems beyond GPT-4 while prioritizing safety and ethical considerations. Did you get lots of television invitations, Keith? Were you on NBC and ABC with Gary Marcus? Well, for anybody listening, this is a draft open letter that has not yet been published and has not yet requested signatories. So, I put it out there as a kind of a boilerplate, if you will, of an alternative approach. And now the question is, should I put it in a place where people can sign it and gather signatures? Well, I mean, in that, obviously, prioritizing safety and ethical considerations, I mean, that goes without saying. What are you saying that's in any way original? I'm not really saying... Sorry, I'm recording live, Jenny. I can't actually do anything. That was my wife wanting me to answer the phone. That was convenient. So, you avoided my question on what you're saying that's original, Keith, on this open letter. Exactly. 
So, basically, yeah, it is common sense. And Sam Altman has been pretty clear, by the way, that they are very focused on building something of importance that benefits the human race, and they're not really interested in building something that is in any way a problem for the human race. But again, that goes without saying. And I mean, it also goes without saying that Altman's doing this to make huge amounts of money for him and his investors. It's a private company, OpenAI. It's not for the public good. Well, I think there's a correlation between the private value of OpenAI and the public good. I don't think it gets to be valuable unless it's a public good. Well, I mean, you can argue the same. Google came out with all this nonsense about the public good when they started, and it's certainly debatable whether Google's been good for the broader Internet. But that's another issue. So, what did this letter say, the open letter asking for a pause? What was the point? Well, through the eyes of the authors, the point was to mitigate the risks of large language models influencing human behavior or thinking by pausing further development on the models. They weren't asking for GPT-4 to be closed down. They were asking for GPT-5 to not be developed. I mean, we know that Google, for example, are working on their own version of generative AI. Is this a pause letter for everyone or just for OpenAI? It's for everyone. I mean, it's a little bit of an incoherent letter. It's for everyone. Who was the lead on it? Who was inspiring it? The lead is this organization, what's it called, Institute for the Future or something like that. It's called the Future of Life Institute, which, by the way, has a somewhat shady reputation. And there was a lot of blowback on the letter because of who published it. And initially, at least, there were lots of fake signatories to the letter, high profile people that actually didn't sign it. So, it wasn't very well received. 
You're wearing a Google hat versus a Microsoft or OpenAI hat. The call to pause this certainly would be for the benefit of OpenAI because they already have a lead. It's like pausing a race after a few laps when one person is way ahead of everybody else. Well, and it's also counterintuitive from the point of view of science. Most science goes through stages until it reaches a point at which it's as close to perfect as you can get it. To ask any science to pause midway through due to the fact that it has not yet reached the end is just an error. There is no end to this. I mean, I guess you could argue that this would be the kind of letter that might have been necessary with the development of atomic weapons. But it's hard to pause the science. You have some links in your newsletter, Keith, to a Vice piece that describes this open letter as a huge mess. What I don't understand is how it even got any press. It's obviously a farce.
Speaker
Isn't the reverse true, that AI is not human beings? Is the open letter suggesting that the humans developing AI are flawed or that the systems themselves are flawed? No, that the users are flawed. For example, OpenAI could, in quotes, send misinformation out to Twitter using bots in large numbers, saying something that isn't true, and we humans will believe it to be true. That horse has already bolted. I mean, that's what Twitter is, full of lies and misinformation and propaganda of one kind or another. So why does OpenAI change anything? And guess what? We humans are not so stupid, and we tend to spot things as being lies, unless the lies are lies that we're part of, because it reinforces some ideological view that we hold. So this is really more about humans than it is about AI. AI is a way of increasing the capability of humans to do all kinds of different things. So what does the letter tell us about humans? That they're nervous, that they're pathetic, that they're divided, that they're disorganized, or that they're led by their own interests and they disguise those interests in ethical language? I think you have to distinguish between the authors of the letter as humans and the humans that the authors are concerned about. The authors of the letter are a little bit like Plato's philosopher kings. They believe they're superior, and therefore they have the right to call a stop, due to the fear that less intelligent humans will be badly influenced by AI. So they're basically taking a kind of a superior paternalistic view to the entire human race, saying you can't trust these people with this AI. And that at the core of it is an elitist point of view. I have to admit, I'm guessing, Keith, that this may be grist for your own mill, in the sense that you're against that, so you naturally associate this open letter. Aren't they also suggesting that we can't trust the companies? 
There was an interesting piece, you didn't link to it, but an interesting piece in the Wall Street Journal this week about the contradictions of Sam Altman, of course, who runs OpenAI, probably the most influential and interesting figure now in all of tech. Yeah, that's a great review, by the way. I mean, it echoes a lot of what we said when we talked about how smart Sam Altman is, which is it shines a light on... I mean, he's clearly smart, whether he's trustworthy is another question. So the issue is also, and I think it would be fair to say of this letter, Keith, that we can't trust the Sam Altmans of the world with our future because he's full of contradictions himself. Well, that's the elitist view. It's basically saying... Because he isn't doing the things that we would do if we were running this. So it's all about... Well, that's counter elitism. I mean, if there is an elite in the world today, it's the Sam Altmans and the Keith Teares of the world. No, I don't agree with that. I think democratization of information means treating every human being as equal as a recipient of information. And the minute you start to diagnose some human beings as less capable of receiving information, you are turning them into basically animals that have no brain. So I think at the core of this critique of AI is an inherently anti-human point of view. Is this just the sensitive northerner in you, Keith, the Yorkshireman who's always been patronized by the all-knowing southerners like myself? Well, Andrew, you've hit a nerve there. I have hit a nerve. You know what we say to people like you? What do we say? Sign on. Although you don't need to sign on, Keith, because you're a wealthy northerner. I prefer the Pink Floyd version. Shine on. Well, this debate is not going away. Keith is very clear. He doesn't think we should pause. I agree, actually. But, I mean, everyone talks about prioritizing. 
No one's going to write anything suggesting that GPT-4 or generative AI shouldn't prioritize ethical considerations. But everyone dresses ethical considerations up in their own self-interest. This is certainly the theme of the week, Keith. Lots of interesting pieces you link to. Vinod Khosla, who, I don't know if he's a friend of democratization. He's certainly part of the new financial elite. Not always a man who does well in terms of marketing and PR to the rest of the world. You have a link with him on how he believes AI will free humanity from the need to work. What is Vinod saying here? Well, I definitely picked up on this one due to my own philosophical view of work. Where I think of work as a necessary evil until technology gets to the point where it relieves us of the need to work. So, I'm very aligned with Vinod there. You're right on the democratization stuff. I mean, he definitely doesn't want his beach democratized outside his house. So, he's a complex individual. But on this, he's 100% right that if it's possible for us to adopt large language models and chat interfaces and connect them to functional parts of the world like robots, it's very likely that the meaning of work will change. That we will all have a friend, if you like, that can help us achieve tasks that will lower the amount of time we have to spend on those tasks. And that that will revolutionize work. It revolutionizes where it creates new work. And of course, the age-old thing, we talked about this a million times on and off the show, is it's transferring wealth and power to the owners of this new platform system technology, whether it's Altman or Khosla, whoever. I mean, how does that... There's still going to be work. I mean, people operating these systems are going to be working. Yeah. Why is that different? I think if you follow the money, think of it this way. I'll just use one example. The California pension scheme invests in Khosla's fund. 
Khosla takes that money and invests it in AI, which then makes a big pile of money for Khosla Ventures, of which 100% of the original investment goes back to the California pension fund, plus 80% of the profit. And that goes into the pockets of pensioners. So actually, the idea that the rich get richer through this is partly true, because 20% of the profit goes to Khosla and his fund. But it's also true that society gets richer through these mechanisms. So now, in the future, and this is the Sam Altman profile in the Wall Street Journal, in the future, would it be good if 100% of the value went back to the people? Sam Altman says yes, and he has a project called WorldCoin. At least part of the inspiration of WorldCoin is to try to figure out how to make that happen. So it is true that as automation replaces human labor, more and more of the value created from automation should go back to society. Yeah. We heard this with the original internet revolution. I don't see much evidence of it. One man who's been following this clearly and a friend of mine and yours, Keith, is Albert Wenger. His last book was The World After Capital, which is describing this world. You link to a piece from him about thinking about AI. What does Albert suggest on this? What's his argument? So Albert wrote this blog post actually as a kind of a note to himself. And he published it so other people can see his thoughts. And it is incomplete thoughts. He didn't finish kind of an end-to-end narrative there. But he put some markers down. The most significant thing he does is that he focuses in on the fact that ultimately computers will do almost everything humans do better than humans. That's his core belief. And I agree with that belief. And I think it's a good thing, not a bad thing. And it's socially good because it frees up time for leisure and choice. But that's his core. And then he points to his book as a place where he elaborates on that theme at greater length. 
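The fund economics Keith walks through here are the standard carried-interest split: the limited partner (the pension fund) gets its capital back first, then profit is divided 80/20 between the LP and the fund manager. A minimal sketch, using illustrative round numbers rather than Khosla Ventures' actual terms:

```python
def distribute_proceeds(invested: float, proceeds: float, carry: float = 0.20):
    """Split fund proceeds between an LP (e.g. a pension fund) and the GP.

    The LP first recoups its original investment; any remaining profit is
    split (1 - carry) to the LP and `carry` to the fund manager.
    Illustrative only; real fund waterfalls add hurdles, fees, and clawbacks.
    """
    profit = max(proceeds - invested, 0.0)
    lp_share = invested + profit * (1 - carry)
    gp_share = profit * carry
    return lp_share, gp_share

# A $100M commitment that returns $300M: the pension fund gets its $100M
# back plus 80% of the $200M profit; the manager keeps 20% of the profit.
lp, gp = distribute_proceeds(100e6, 300e6)
print(lp, gp)  # 260000000.0 40000000.0
```

This is the sense in which "society gets richer" alongside the fund manager: the pensioners' share of the upside ($160M here) is four times the manager's.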
But I think it's a holding piece, not an end thought. I will say I have a private dialogue with Albert about the application of AI to venture capital. And on that, he completely disagrees with me. Because you're threatening him. Your little startup, Keith, which is not so little anymore, is threatening Union Square Ventures. Maybe one day Albert will quote-unquote work for you. I don't think either you or Albert really need to work. You also link to a couple of interesting pieces on Stratechery, which is one of your favorite blogs or publications. This one, ChatGPT Gets a Computer. What does that mean? Well, last week we talked a bit about this, that OpenAI released some plug-ins, all kinds of plug-ins. Expedia has one. OpenTable has one. Where applications can leverage the intelligence of ChatGPT or its learning power for a specific purpose. And what that does is it basically links ChatGPT to the Internet. And up until now, it wasn't linked to the Internet. It was kind of a closed environment. Now it's linked to the Internet. It can trigger events in the real world, if we count the Internet as part of the real world. So ChatGPT getting a computer is really ChatGPT being able to be linked to all kinds of other environments that have computational power. And that represents a shift at a high level. Now, this is a long article. There's a lot more in it than I just said. Well, it's more complicated. Can this actually be banned? You don't link to this because it came out too late. But the Italians yesterday banned ChatGPT, which is an astonishing conceit, I guess, or hubris on the part of the Italian government to simply ban ChatGPT. Are they banning, basically, if they're doing that, are they banning the Internet? You know, what they're trying to do is leverage Europe's privacy laws and use them to ban ChatGPT on the basis of privacy invasion. I haven't drilled down into the details. It only was announced yesterday. 
And, I mean, obviously, it's not possible for them to actually ban ChatGPT. But the fact that they want to is kind of interesting in and of itself. Wasn't King Canute a Northerner, Keith? King Canute was certainly an Italian, I would say. No, he wasn't. I'm joking. I'm talking metaphorically. I know you're joking. Wasn't he from Yorkshire, King Canute, or was he Norwegian? I actually don't know, Andrew. Is the King Canute metaphor appropriate that anyone now trying to ban ChatGPT is like Canute, standing in front of the ocean and suggesting it should stop? Exactly, it is. It is exactly that. It's not going to stop. And as for the origins of King Canute, I am now in ChatGPT and I will ask it. Where was King Canute born?
Speaker
It's probably Scotland. Oh, it's Danish. Oh, the Danes. King Canute, also known as Canute the Great, was born in Denmark around 995 AD. Well, we all want to be Danish in the next life. Anything else happening, Keith Teare, apart from ChatGPT? Your news of the week is, I have to admit, and this is not a criticism, a bit thin, not much is happening. You always tell me that I put too much in, so I thought I'd try to focus on that. Yeah, well, you know me, I'll always criticise you. The first two articles there are super interesting in the world of venture capital. Well, certainly from your point of view, because you're replacing the generalist seed VC. Well, not really. We're the support network for early stage investors. We support them continuing to have equity in the companies that they find. We can be seen as enemies of the later stage venture capitalists. I don't think we are, but there's a reasonable discussion to be had there. And what this is talking about, however, is generalist investors, people like me who are not really specialists in anything. I can't survive an era where deep technology is driving most of the new value. You're going to need experts in deep technology to make decisions about which AI companies to invest in and which AI companies not to invest in. So the death of the generalist VC is all about that. And Hunter Walk writes a piece that kind of is aligned with that point of view as well. I thought both were really interesting. I don't know what I think yet. I think that solo investors who are generalists, I feel it's a human thing. It's super hard to replicate what they do. I think the least threatened by automation are early stage investors because they're really reading signals that are very human and hard to... That's what you're doing at SignalRank, Keith, reading signals, talking about experts and Wengers. Do you remember that Jose Mourinho described Arsene Wenger as an expert in failure? Is Albert Wenger, is he your Arsene Wenger? 
Are these traditional VCs now like Union Square Ventures? Are they experts in failure? No, no. Union Square Ventures is a great example of a fund that very early has a great track record of picking future... Yeah, but they're still wrong nine out of 10 times. Yeah, but they're right 10 out of... For every dollar they invest, they make 10. So they're right. Aren't you at SignalRank, aren't you reversing that? Not at the stage... From one out of 10 to five or six out of 10 using AI? Not at the stage they invest. At the seed stage and at the A round stage, you can't do better than them. By the B round stage, you can. So speaking of failures, the startup of the week is a failure this week. Good Eggs. Why did you put Good Eggs as the startup of the week when they're failing? Well, you could say they're failing, but they've also reinvented themselves. They did what's called a cram down round. I mean, how many headline writers would preface that with omelette? They scrambled the omelette. So what they did is they... Well, first of all, Keith, sorry to jump in here. What are Good Eggs or what were they? Good Eggs is basically a food delivery service farm to table. It's very, very high quality dairy, meats, you know, and associated products, produce, fresh produce, basically. And it's in my area. It's used by individuals who can afford to pay a bit more for better quality food. We don't use it, so I've never actually used it. So the poor people of Palo Alto, is this sort of the internet version of Whole Foods? Kind of, but in a subset of what you can get in a Whole Foods. It wouldn't do everything. OK, so it's a high end boutique-y food online service. So what have they done that's so interesting? Well, they've got a big business that loses money. So they've basically raised a small amount. I think it's about $6 million. I can't remember the exact amount. And in raising that money,
Speaker
they revalued the company to reduce the value of the old investors by 94%. So the new investors own most of the company. The old investors only own 6% of what they owned before. Isn't that a bit dishonest? I'm sure the old investors aren't very happy about that. Well, it's called pay to play. The old investors would have been given a choice to participate and chose not to. Why did I choose it? Because it's symptomatic of what's happening right now. This will happen more and more. The companies that were funded in the past at high valuations that they can't sustain post the correction, insofar as they survive, and most of them will not survive, but insofar as they do survive, which Good Eggs is, which is why it's once again a startup. It wasn't a startup a week ago. It's the survival, not of the fittest, but of the survival of the scrambliest, Keith. It is. Scrambling is the name of the game. So what you're saying to startup entrepreneurs who are in an existential death spiral is scramble your eggs? I always say to founders, doing a down round, even a dramatic one like this, is punishing to those who no longer want to invest. But from the point of view of the founders, it's almost a non-event because you get to start over. Almost for certain, the new investors will issue options to the founders to make them, if not whole, at least close to whole, as long as they execute going forward. Extending the egg metaphor, once the egg has been cracked, it can't be put back in its, not in its bottle, in its shell. So why not just crash the whole company and start with a new batch of eggs? Well, somebody believes there's real value there. The people put the new money in. I can't judge that. I've never, I haven't looked at it, but definitely somebody believes there's value there and prepared to put money behind that belief. You know, good luck to them. And anyone that wanted to could have joined in that investment, but chose not to. 
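The cram-down arithmetic Keith describes can be sketched with a hypothetical cap table. The function name and share counts below are illustrative, not Good Eggs' actual figures; the only number taken from the discussion is that prior holders end up with 6% of the company:

```python
def cram_down(old_shares: float, old_pct_after: float):
    """Return the new shares to issue so existing holders end at old_pct_after.

    To dilute old holders down to 6% of the company, the new issuance must be
    old_shares * (1 - 0.06) / 0.06. Hypothetical numbers for illustration.
    """
    new_shares = old_shares * (1 - old_pct_after) / old_pct_after
    total = old_shares + new_shares
    return new_shares, old_shares / total

# 10M existing shares crammed down so old investors hold only 6%:
new_shares, old_pct = cram_down(10_000_000, 0.06)
print(round(new_shares), round(old_pct, 2))  # 156666667 0.06
```

The striking point is how lopsided the issuance has to be: leaving old holders with 6% means the new money receives roughly fifteen times as many shares as everyone who came before, which is why "pay to play" rounds feel so punishing to investors who sit them out.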
Well, finally, back, your new hero, your old hero used to be Paul Graham, but he seems to have morphed into Sam Altman. He is your tweet of the week. You've always had a great deal of admiration for him. And this is a significant, this is a substantial tweet. This is Sam Altman at his most philosophical. What is he saying, Keith? He's basically proving to the open letter writers that he is not the evil devil that they try to characterize him as. We're trying to prove anyone can write a tweet. Yeah, but he's acting on it. The first thing he's saying is, he uses this word align. So the technical ability to align a super intelligence is needed for a general intelligence. That's very chilling, this idea of a super intelligence. It's sort of post-human in a way, isn't it? Well, only if you think it's your enemy. If you think it's a super intelligence that you get to use in your work, it's an asset. So he wants it aligned to human outcomes is what that means. Yeah, but everyone's definition, that's in the eye of the beholder as well. What's human for Sam Altman is not human for others. Well, he's open to discussing that and having people involved in it. Well, that's very generous of him. What about the second? The second is various efforts should be coordinated so that they all do the first one, not just OpenAI. And that requires, in my open letter, I use the example of ICANN in the domain name field as an opt-in stakeholder community that looks to govern or at least keep safe the internet infrastructure. I think he's calling for something similar to that structurally.
Speaker
But what's the danger of this all turning into another duopoly of OpenAI slash Microsoft and Google who are way ahead of everybody else? I think now the genie is out of the bottle. I think it's very unlikely. I think it's closer to the internet where everyone gets to play. Than it is to the web portal stuff where only a few get to play. So, yeah, I'm not worried about a duopoly. I also think OpenAI have done a great thing by mainly focusing on developers so that their software benefits all other software, not just themselves. And actually, one thing, just to go back to your news of the week, Apple are not playing in this space and they've announced that their mixed reality headset may not appear at WWDC. Are Apple, do you think, freaking out about all this? They're not in the discussion. I mean, Microsoft's dominating it. Google is still playing in it. You connect with Meta. Maybe this has given a lifeline to Meta. But Amazon and Apple are very much out of the conversation, particularly Apple. Yeah, I think they're all out of the conversation, actually. I think what OpenAI has done is redrawn the hierarchy of important technology companies with themselves at the top. But we just don't know it yet. Five years from now, that'll be obvious. And then his final, the final point in the Altman tweet of the week was an effective global regulatory framework, including democratic governance. I mean, that's just more UN gumph, isn't it? That's not going to happen either. Well, it definitely won't happen if what he means by that is governments. He'd end up making the same mistake that Facebook made when they tried to launch their crypto coin, asking for government permission. The minute you ask for government permission, you're killing your project. So this should be done outside the government realm. You don't want to bring in your friend Lina Khan as the Tsarina of generative AI, Keith. Yeah, that would be a very bad idea.
Speaker
Everybody sing!