Transcript Viewer
Nov 24, 2023 · 2023 #41
Speaker
I've got sunshine on a cloudy day When it's cold outside, I've got the month of
Speaker
November the 24th, 2023, the day after Thanksgiving. And of course, if it's Friday, it must be That Was The Week. We haven't done it for a couple of weeks and Keith is back. I was in New Zealand, so we missed it last week. Not much has been happening in tech over the last couple of weeks. Keith, have you got any news? Yeah, interesting. So, by the way, I did it last week. I did this video called A Walk in the Park without me. It's always a walk. I took this camera and put it in front of me and walked in the park and spoke to the thing. Oh, wow. So, you didn't need me. You've made me redundant. You've done an AI on me. Well, I think there's nothing that can replace the dynamics of our conversation, Andrew. Yeah. So, the front cover this week says A Tale of Two Weeks. Yeah. And you've managed to make it an AI cover with some more terrible AI art, which you love so much. A Tale of Two Weeks. First week, Sam Altman out. Second week, he's in. Out in red, in in green. And, by the way, at my Thanksgiving dinner yesterday, it was pointed out to me that it's only been one week. So, it feels like a year. It feels like a long time, but it was only a week. So, yeah. So, to recap, the board of OpenAI, which is a not-for-profit holding company sitting on top of the whole OpenAI structure, fired Altman, demoted his partner, Brockman. That led to Brockman resigning. That then led to a whole bunch of other people resigning. That was over the weekend. And by Monday morning, out of 770 employees, 700 had signed up to join Microsoft, along with Altman and Brockman, leaving OpenAI as a shell. Satya from Microsoft clearly was very worried that when the stock market opened on Monday, his share price would crash. And that was a kind of mission that very successfully supported the share price. It actually went up. And also left the door open to Altman staying. And over this week, that played itself out. And by Wednesday, the board had changed. Two people left the board. Some new people joined, including Larry Summers. Altman was reinstated, along with Brockman. And it's back to normal, in quotes. But, of course, it isn't. It's a whole new world. But that's the sequence of events. An alternative, not that I'm making editorial suggestions, Keith, for That Was The Week. But an alternative title could have been A Tale of Two Companies. Because what strikes me about OpenAI is it was, maybe not so much now, it was simultaneously two companies within one company. It's a weird arrangement. Can you explain that? So, well, the history is that OpenAI was created as a not-for-profit by Elon Musk and his co-founders. And those co-founders included Altman. Yes. And Brockman. And I think Ilya. Yeah, Ilya, who worked for Geoff Hinton in Toronto, was one of the figures behind LLMs. I mean, reinventing AI in the early teens. And then, at some point, they realized they needed a lot of money. Can I just jump in here, Keith? One of the things that's interesting to me is why they created the nonprofit. I mean, the obvious conclusion is, well, they were worried about AI, and they wanted to donate it to the public, and blah, blah, blah. But I've read some other stuff. I'm not sure if it's any of the pieces you suggested. You've got some great pieces, which we'll talk about. But one of the things I read suggested that one of the reasons they created the nonprofit in the first place was for tax reasons, because it allowed them to park large amounts of money in this company without any tax implications. Do you think there's any truth to that?
I don't think so. I think it was a different... It was driven by Elon Musk's fear that a purely commercial incentive for AI would lead to uncontrolled AI outcomes. So, at the very core of it is the idea that capitalism and AI can't really work together. Yeah, but you're crediting Elon Musk with perhaps a little bit more profundity than he's capable of. Firstly, this wasn't his company, and it's not his story. This is a story about Altman rather than Musk. So, are you suggesting that Musk was really the guy behind OpenAI? Only at the beginning. He abandoned it when the for-profit was formed, and Microsoft allocated $10 billion worth of computing power. And that took the form of an investment into a subsidiary that was still governed by the not-for-profit, but it had commercial intent. Right, okay. So, let's leave Musk, because Musk is... I don't see Musk as central as perhaps you do. So, it began as a non-profit, co-founded by Musk and Altman and the guy, Ilya... Who was the fourth person? Greg Brockman. Greg Brockman. It had some heavyweight initial board members, including Reid Hoffman, who seems ubiquitous. And what did it raise? Like $500 million or a billion? I don't remember what the initial raise was. A large amount of money. I think it was $850 million. But it was all their own money. They all put their own money in. Right, but they had a lot of money. Which, for better or worse, didn't get taxed. They could park some of their profits from other tech initiatives into this thing. So, it was always intriguing. No one quite understood. It was always a bit murky, even from the beginning, wasn't it? Yeah. Vinod Khosla, by the way, invested very early as well. I don't know if it was murky. There was a lot of unknown. You know, at that time, I don't think any of us really believed AI was close to being realistic. So, it felt like a highly philosophical conversation between people thinking about the future. It was 2015. Yeah. So, it was a bit abstract and I don't think anyone paid that much attention. And let's just be clear: Altman was certainly one of the core drivers. One of the interesting things about all this is it's brought out Altman's intriguing relationship with Y Combinator, which I hadn't known about at the time. When did he get fired at Y Combinator? I don't think we know that he got fired. Well, I think Paul Graham's made it clear he got fired in his own way. Your friend, Paul Graham. Yeah. I mean, he flew over from the UK. You'd do that to fire somebody. Yeah. So, Jeff Ralston took over at YC. I can't remember what year that was. I think it was about 2019. It was about the same time. Yeah. And then Sam Altman became the CEO and the driver in the mid-teens of OpenAI. Yeah. And I don't think anyone really knows the story of the YC exit, to be honest. Well, I think a lot of the stuff that's come out made it clear. I mean, Paul Graham's put it quite clearly. He's a difficult guy to work with because he's so brilliant and so selfish or self-focused. He's Jobsian in that sense. A genius who pisses people off. Yeah. It's interesting. I don't have the gene that finds that objectionable. Well, I'm not suggesting it is. I'm just stating a point. But he is who he is. He's clearly driven. Yeah. I mean, he's a classic entrepreneur. He's self-interested, impatient. Yeah. And I think this week he's shown himself to be quite calm, by the way. Well, let's leave that. So, let's go back to the story because this is the important thing. Otherwise, it gets a bit silly.
So, what happened within this nonprofit? When did they establish this for-profit sideline or parallel company? Why didn't they just start another company? Just do a regular startup? Well, that's super hard. All the IP would be inside OpenAI and it would be almost impossible. So, I think they just tried to figure out how to pay for the costs of the development, which are in the billions. Well, you're an entrepreneur, Keith. By the way, you should know. Sorry, go on. Just before you do, just a factual bit. When Microsoft invested, along with others, they put a cap on the gains at 100 times the investment. So, if you put in a billion, you couldn't make more than 100 billion. So, there's kind of a share buyback clause. Well, that's still a pretty good deal. You put a billion in, you get 100 billion. It's 100x. Yeah, but if the company ends up being worth trillions, it's not coming to you. And if it works, it will be worth trillions. So, there's kind of an unusual part even of the for-profit. But coming back, if you're an entrepreneur, you're also an investor, would you put money into a company that began as a non-profit and clearly had second thoughts and wanted to create a kind of parallel or caveated for-profit company within this non-profit? I still don't really understand how they got away with it, really. I think it's one of those things. I mean, the journey of a company is quite specific to the people involved and who they engage with. And there's problems to solve based on pre-existing structures. And, you know, minds come together and figure out solutions. So, I think it's honestly quite organic. I don't think it will… Yeah, in other words, you're not answering me and they didn't answer because I don't think this is a question you can answer. And that is why they got into such trouble. Yeah, they built a contradiction into the very heart. Right, they built a contradiction. I like that. You're an expert in historical contradictions, Keith. Is there something Hegelian about this? Well, there's certainly a thesis, an antithesis, and a synthesis, which is the Hegelian reference. So, the contradiction is, on the one hand, it was a non-profit designed to save the world from AI or at least protect the world from the dangers of AI. On the other hand, it was a for-profit steaming ahead with generative AI. So, when did the Microsoft investment happen? I don't remember the year, to be honest, but it was about three years ago. But one of the other things that's not clear, is this real money or are these vouchers which require OpenAI to use Microsoft services? So, it's not real cash. It's a little bit of both, but it's mostly tokenistic. So, it's not… By the way, Microsoft are well-practiced at this. When I did a deal with them in 1999 at RealNames, they gave me control of the browser address bar in Internet Explorer, and in return, they got 20% of my company. So, it's a standard thing. Yeah, it is. Now, the catalyst, however, we haven't really got to the crux of it yet. This contradiction at the heart wasn't really the cause because that was peaceful coexistence, really. The cause, apparently, is an internal project known as Q-Star that Ilya was in charge of. Apparently. Where is your evidence for that? Because there have been a lot of rumors on… There was a Reuters piece yesterday that seems to have a lot of backing that basically makes the point that in the most recent internal tests of this Q-Star project, ChatGPT has learned math.
There's a lot of detail on how, but basically… Yeah, I read it, actually. Now, by learning math, of course, it can calculate. And once you can calculate, computers become available to you to do all kinds of things. And apparently, Ilya got super concerned about the implications. Apparently, it's elementary-level math. It's not super math. I mean, leaving aside that, which probably is important, there were a lot of tensions between Altman and the board. Let's just remind ourselves of what the board looked like because that's the interesting thing to me. And I think it astonished a lot of people that a company that was valued at $90 or $100 billion had five people on the board. Most of the heavy hitters had left. Musk left. Hoffman left to do his own AI startup. So, there was Altman and Brockman. And who were the other three or four? Ilya? Ilya was one, and then two women. Tasha McCauley, and I'm blanking on the name of the remainder. Toner, Helen Toner. Yeah, but they were all focused… And Adam D'Angelo, the ex-Facebook guy. Yeah, who Altman spent Thanksgiving with yesterday and apparently is friendly with. So… Even if they were on opposite sides, and D'Angelo was in favor of firing. Yeah. Adam D'Angelo was in favor of firing Altman. Yeah. So, the board's remit combined not-for-profit with safety.
Speaker
And the difference of opinion is all around speed and safety. I don't think it's really about commercialization, except insofar as commercialization is reducing safety. So, it's really all about safety, which is why the debate is about this grouping called effective altruists, who have been very loud and strong on AI and safety and have made the primary discussion around AI to do with the risks to humanity of developing it. And, of course, Altman is of the opposite point of view. And those are known as the e/accs, for effective accelerationism. And it's led to this new word in the lexicon, which is now being used, decels, short for decelerators: people who want to slow everything down. That really is the crux of it. And Ilya catalyzed that because he was afraid that the new discovery was going to lead to speeding it up, but also increasing its capabilities in an unpredictable way. And it all blew up. And I don't think this resolves it, by the way. I think we're… Well, we'll get to the resolution. We are talking with Keith Teare, the CEO of SignalRank and the author of That Was The Week, an essential newsletter about what has happened in tech. And, of course, there's only one story in tech over the last couple of weeks. Keith, you say this is the rivalry between the accelerators and the decels, but they're both legitimate. They're both articulate. They both have good arguments. The problem is that it coexisted within the same company. There's no reason why you can't have nonprofits designed to slow AI down. And, of course, there are always going to be for-profits focused on seizing territory. Isn't the problem that you can't have an accelerator company which is simultaneously a decel company controlled by a decel board? Well, I think if you historicize that a little bit, at the time of the Enlightenment, you could say the Catholic Church was legitimate and so was Saint-Simon, the believer in positivism. So there's always different points of view. But they were different. Saint-Simon didn't live in Rome and they were both legitimate. The weird thing is that this was taking place simultaneously within the same company, in the same boardroom. I think that's always true in history. The idea that there is uniformity is false. Every board I've ever been on, and I've been on many, there's never been uniformity. Yeah, but if you take Google, for example, which revolutionized the Internet and Web 2.0, the board wasn't divided. It was a for-profit company. I mean, Sergey and Larry differed in terms of business models and they brought in Eric Schmidt, but they never smashed the company. They never smashed the company. But remember, Schmidt used to be on Apple's board and then went to war with Apple over Android. Yeah, but that's different. That's two competing corporate entities.
Speaker
So clearly... I was going to say, I think in Silicon Valley at the moment, Andrew, this was true yesterday at my Thanksgiving dinner with friends, you're going to find strongly held views on what appears to be a very strict divide between people who believe in unrestrained innovation and do not welcome regulation and those who take the opposite view. That is the moment that we're living in, in the Valley. There are basically the equivalents of Luddites and innovators. And, you know, both have strongly held views. Both can defend them. Both claim humanism as their goal. Which is a meaningless word, right? Yeah, a meaningless word. So it is actually very symbolic. It goes way beyond OpenAI to this moment in the philosophy of technology. The Wall Street Journal had a great article yesterday saying that this OpenAI saga has dealt a mortal blow to what are known as the decels or the effective altruism people. And I think the Wall Street Journal is probably calling the end of the war way too early. Yeah, but they don't like the effective altruism. Of course, the term has got horrible marketing because of another Sam, Sam Bankman-Fried. Yeah, but it is symbolic of the moment and people are taking sides. Friends are taking sides against each other. And it's become quite dogmatic and ideological. Marc Andreessen is basically a divider in this because he's spoken so strongly on the topic that he's dividing opinion. But again, coming back to my question, the reality is you're going to have both. You're going to have non-profit companies and you're going to have for-profit companies. And that's always been the case. It was the case throughout the history of the internet. You had non-profit browsers and you had for-profit browsers. It's just weird that it would take place within the same company. Well, you're right. OpenAI is kind of novel in that sense. However, both points of view are fighting. So it isn't peaceful coexistence. It's a fight for the heart and soul of the meaning of the word innovation and whether humans should be afraid of it or embrace it. No, it isn't because the decels, they're not innovators. They don't claim to be. They're not interested in innovation. They're interested in something else. Well, what? I don't know. They're interested in whatever you want to call it. You may not like them, but they're regulators or anti-tech people. Yeah. I don't see that. I don't think it's a... Well, in a way, that tells us how relevant our conversations are when we do these on a Friday because Lina Khan, for example... And she should come up. She must be frothing at the bit to control OpenAI. Yeah, well, we've talked about her a lot, but that urge to regulate is a big part of the decel, anti-big tech kind of set of ideas that tries to make us all threatened by technology. And I just can't go there. I mean, to me, it's... I thought you liked Lina Khan. I thought she was right. You're a big fan. So there are two or three questions here to come. I mean, this seems to have been resolved in the short term. So my first question to you is, has this actually changed anything? I think it's changed the awareness of the issues massively. And I think... Which issue? The issue within OpenAI? No, no. The issues that lived in OpenAI but exist outside of it, which is, you know, what do we think about allowing scientists to innovate free of regulation? Yeah, I don't agree. I mean, that issue was out there last week, the week before, the last year, the year before. I don't see what difference it makes.
It's shone a light on it. I mean, people are much more sensitive and aware now. So almost any topic that comes up, and I discussed many of them yesterday with friends, the philosophical roots of your opinions are now being transparently exposed and discussed openly. And that's new. I mean, I'd never heard of e/acc until about a month ago. But you? What's e/acc? Exactly. It's effective accelerationism, which is the opposite of effective altruism. Or not the opposite, but it's the other side of the argument. Yeah. I have to admit, I don't buy it. I mean, you go to your fancy Palo Alto dinner parties, Thanksgiving. These are all wealthy, powerful people. I just don't think most people... I don't think it's affected anything in that sense. All right. I mean, clearly... So who won in this? Are there any winners? It seems as if everyone's come out a bit tarnished. Sam Altman doesn't look quite as Christ-like as he did before the crisis, does he? I think he played a very smart game. I'd say the winners are Satya, the CEO of Microsoft. Yeah. Altman and the commercial side of OpenAI are winners. I think Vinod Khosla comes out a winner. He was very vocal on Twitter during the whole thing.
Speaker
Probably Andreessen on the fringe. He's not directly a beneficiary, but his point of view is... But I think that the nonprofit board has come out looking pretty good in the sense they stuck to their guns. They had their position. They were willing to destroy the company for their beliefs. And in the end, maybe the... I don't know what changed their minds. Maybe the employees? Well, they were... You remember Ilya apologized? Yeah. And said he regretted instigating the whole thing. He's come out. He doesn't look... He looks a bit of an odd character, doesn't he? Yeah. I think it's going to be interesting to see if they can rebuild those relationships. But basically, in pursuing their beliefs, the board almost destroyed OpenAI. And OpenAI, whatever you think of it, is a massive plus. But that's... And one of the articles you linked to, I thought was excellent. I actually read it before the newsletter. Stratechery. Stratechery, Thompson's piece, which made it clear that Microsoft had all the IP rights to OpenAI. So it wouldn't have even mattered to Microsoft had OpenAI essentially collapsed, and they would have hired all the ex-OpenAI people. So from Microsoft's point of view, probably the collapse of OpenAI wouldn't have been a bad thing. I think it would be bad from a regulatory point of view because they benefit from not owning OpenAI. They own 49% of the commercial side. And if they owned the whole thing, one assumes the regulatory regime would be all over them. So I think they prefer the outcome as it happened, but they were prepared to step in and hire everyone. By the way, Salesforce put out a competitive bid telling every OpenAI engineer that they will match Microsoft's offer to them. OpenAI is about to do a share sale of employee stock where many, many employees will get several million dollars each, up to 10 million, I think. And Salesforce and Microsoft both offered to match those scenarios. So that tells you, when a single engineer is worth 10 million, how much focus there is on this. But the Thompson piece suggests that had OpenAI just folded and Microsoft hired all the ex-engineers, then, as was originally announced earlier in the week, they would have brought Altman in to run an AI startup within Microsoft. They would have essentially got OpenAI for nothing. Well, not for nothing, because they still would have all the costs and the 10 billion, but yeah. But the 10 billion isn't cost, it's not cash. So they would even have been off the hook in terms of those vouchers. Yeah, well, they still have to spend it. Whether they spend it with OpenAI or just spend it themselves is still a cost of running the whole thing. And it's just a question of where the money goes. But yeah, they literally could have acquired OpenAI for free, but they didn't really want to. There are advantages. From a PR point of view, do you think? From a future regulatory point of view. Microsoft is, you know. Yeah, the lawyers determine a lot. And did Altman come out a winner? I mean, as I said, his reputation seems a little tarnished. There was the stuff about getting fired from Y Combinator and all these arguments. He's clearly a very focused, selfish guy, for better or worse. I mean, he's an innovator. Do you think he's come out of it looking better? Well, look, all innovators get tarnished. The more independent they are, the more they get tarnished. Look at Elon, for example. So the people who don't like innovators, who are self-motivated, are always going to dislike anyone like that.
So I don't think he's lost friends. I just think he's reinforced enemies in believing that they're right to be enemies. But I don't think he's lost any friends. And I actually think, for me, his reputation went up because he played a very calm game in what must have been a highly emotional few days. He didn't overreact. He talked to all the right people, and in the end, got the right outcome. He's very smart. But that was both the compliment and the knock on him, what Graham said. I think his quote was brilliant, actually: you drop him into a community of cannibals, and then a month later, he comes out as the king. I mean, he's clearly a brilliant operator, for better or worse. And I guess the board just didn't trust him. And the board was clearly out of its league with a guy like that. You know, the board implied that they were blindsided on the technical capabilities when they said he's less than candid. I think it's Ilya saying, look, we've made this breakthrough in Q-Star, and the board was unaware of it. And given that the board's governance remit includes safety, and Ilya was saying this may not be safe, both of which I disagree with, but still, the board really had no choice. I think the board, you know, from its own point of view, did the right thing. Yeah, I think the board comes out looking, whether you approve of what they did or not, heroic. They stuck to their guns. D'Angelo stuck to his guns. He's clearly a survivor. I mean, women haven't come out of it very well. The two women on the board got fired. The new people on the board, Larry Summers and Taylor, are obviously men. Should we expect women on the board at some point here? It doesn't make Silicon Valley look very good from a female point of view. I hate to say it, but I don't have a strong point of view about that. In other words, you don't care. You know, if it was all women, I wouldn't comment on it, and if it's all men, I don't comment on it. I just want to know, are they any good? And Larry Summers is an odd choice to me. Especially since he got fired from Harvard for some transgression with women. Oh, did he? I didn't know that. And Bret Taylor's different. I mean, he's a very, very talented Silicon Valley exec. He did FriendFeed back in the day. He went to Salesforce. He's a grown-up. Larry Summers, obviously, is also a grown-up, but his expertise... So I don't really see the board as the center of gravity. I think that's seat-filling. I think Altman now is the center of gravity. That's what shifted. But he's not on the board now. It doesn't matter because the board has been rendered powerless by all these decisions. Well, yes and no. I mean, they replaced the two board members. I mean, Larry Summers is not a pushover. Presumably, if Altman continues not to be trusted by the board, he'll get pushed out again. I doubt it. I think when you have 700 out of 770 employees... Yeah, but they were the ones. That's their only concern. They don't care about Altman one way or the other. Their only concern is to cash out. No, no. I think they all could have taken very lucrative jobs at Microsoft. Yeah, but they'd have lost their billion-dollar payout if the company had fallen. No, they wouldn't. Microsoft offered to match them. Yeah. So I think this is a board that's been captured by the employees. Maybe. So has this changed anything? I mean, where are we two weeks later? We're back where we were before, except with a stronger management team. Yeah. And how does this impact OpenAI's competitors?
There's Google, of course, and then Anthropic and some of the other startups. I think they should all be quite sad. I mean, OpenAI is very, very far ahead technically. The others are trying to catch up. I think Midjourney is more than caught up on imagery. But when you look at the whole portfolio of what OpenAI has in large language models, apparently now in math, in music, in imagery, it's very far ahead. And I think the hope that Google would have had, or others, for that matter, that OpenAI would collapse didn't come true. So I think it's not a good day for everyone else. And one of the other, there's so many pieces to this narrative. One of the other interesting pieces was all the startups dependent on the OpenAI API, whatever it is, in terms of developing their own products. So, presumably, have some of them left and chosen other partners? They all started looking into alternative options. But honestly, there are none. There are no good alternative options. So I think Facebook's LLaMA, which is open source, although it's... No, Google's is LaMDA, isn't it? LaMDA. LaMDA, yeah, sorry, LaMDA. Facebook's one is the place most of them turn to, but it has a lot of restrictions on use, and you end up paying Facebook if you use it over and above a certain amount. So there was no good alternative technically either. The other ones are not as good as ChatGPT. So I think it's back to normal there now. And what about Microsoft? There was an interesting piece by John Thornhill in the FT this morning about the turmoil exposing Microsoft's investment in and dependence on this LLM. Now there's a new thing called SLMs, small language models, for which Microsoft is better equipped. Are we shifting from an LLM world to an SLM world? They coexist. Small language models are basically focused on a corpus of data that is more specific to a topic. So they coexist. They're not separate from each other. Normally in tech, you get this trend called unbundling: with something like Craigslist, which sells everything, little by little you get sites that sell cars, other sites that rent homes, other sites that do other parts of it. And eventually Craigslist is a shell of its former self because people go to these specialist sites. Well, that probably is what will happen over the long run with AI. There'll be specialisms, and small language models are a better technology for that. Cheaper, faster, more focused, but they do less. AGI, which is the end goal, basically artificial intelligence that is general to everything, remains the goal of most researchers. And if you have AGI, by definition, you do not get unbundling. And one of the interesting things about the original Microsoft deal with OpenAI, and the Stratechery, however you pronounce it, piece reveals this, is that had OpenAI developed AGI, Microsoft wouldn't have had the rights to it. So I don't know how you determine whether or not you get to AGI. Yes, it's an amorphous concept, AGI. I mean, at some point, the large language models are good enough to replace many, many human tasks. And AGI is usually defined as computers being better than humans at everything.
Speaker
We still have a bit of a way to go on that. So I've recommended this Thompson piece. It's good. OpenAI's Misalignment and Microsoft's Gain. I thought it was brilliant, although maybe it's slightly dated because it came out last Monday. Is there one piece, you've got a number of pieces you recommend, is there one piece that you think captures everything? So on the one hand, it seems very simple. On the other hand, it's incredibly complicated. What's the best writing on it this week? I think the first essay of the week, which says the AI industry turns against its favorite philosophy, which is... That was Semafor. It was Semafor. I think that is the most prescient piece; I find it very well attuned to what's going on in Silicon Valley. Yeah, I didn't know that Jaan Tallinn was involved in this. I talked to him many years ago in Estonia, in Tallinn itself, and it doesn't surprise me, actually, that he's part of this. Yeah. So I think that's probably the best one. Chamath Palihapitiya wrote a nice history of OpenAI, by the way, if any of you... Yeah, I was looking at that. So there's a lot there. Has anything else happened in the last couple of weeks, Keith? OpenAI sucked all the oxygen out of the room. Well, the Binance CEO has pleaded guilty to money laundering and is set to agree to a $4 billion fine and is no longer allowed to be the CEO of Binance. Is he going to go to jail? So he's not Sam Bankman-Fried? It doesn't look like he's going to go to jail. It looks like he's going to just pay a fine, but who knows? He has to appear in court in February in Seattle for the charges to become official and the penalty to become official, and I do believe judges have some ability to not do what has been previously agreed. So I think there is a question mark there. And then Scott Shleifer left Tiger Global, who is the second major exec to leave. You used to talk big about Tiger Global until you conveniently... We don't hear about them anymore. Do they have a future? They do. They definitely do, but different to the past. And then finally, the other news of the week is Musk says that xAI's chatbot Grok will be launched to premium subscribers. I'm a premium subscriber. What does that mean? I think it means you'll get an interface within the Twitter or X client that lets you have a chatbot. Does that have any value? Is Musk still a player in this space, do you think? It remains to be seen. The only thing we really know about Grok is that it's been built with a sense of humour that involves irony and sarcasm. Which he doesn't possess. That's one thing he's missing. That's probably why he built it in, so he can... It's terribly named, Grok. Grok. Well, it's quite a good word, Grok. Do you grok it? But it was a wonderful real-time drama, Keith, wasn't it? Yeah. Everyone was glued to their X and their news updates. Things were changing on an hour-by-hour basis. Exactly right. And the script, I'm sure, is being written now for the Hollywood movie. Yeah. So, obviously, finally, we end with our two favourite features. Startup of the Week. What is the startup of the week, Keith? The new OpenAI, of course. Om Malik wrote this piece, but I decided to drop it into Startup of the Week because where else could you look for a startup of the week? OpenAI is now a new company, I suspect: unencumbered by its now-neutered board, free to innovate. That's my guess. But has the whole non-profit piece of OpenAI, has that just gone away? Is it now just a standard for-profit start-up?
Structurally, it remains in place, and so it could come back to bite them. But practically speaking, it's no longer the driving force. When you say it could come back to bite them, what do you mean? Well, if the board suddenly believed it should try to stop the commercial side again, the fight would reoccur. And if it did reoccur, I think Altman would probably be out, wouldn't he? What do you make, also, of the other thing we missed? There's so many pieces to it. Which is that they appointed a post-Altman CEO who lasted about 24 hours. Well, they appointed two. First, they appointed the CTO, a very impressive woman, as the CEO, and then she came out on the side of Altman. So they quickly removed her and replaced her with the former CEO of Twitch. He didn't get great press, and he conveniently withdrew. Emmett Shear, I think, is widely considered in a favorable way, but he is a decel. So a decel was never going to get power here. And finally, X of the Week. So many tweets, so many Xs about what happened. You've chosen Aaron Levie, the lead magician and CEO at Box. What did he say? The problem we have in AI right now is not that it's getting too powerful, it's that it's not nearly powerful enough. Very little has changed thus far because of AI, and won't until models get faster, cheaper, more accurate, and more intelligent. Building safe AI, and then you've got to click through to see the full lot, building safe AI is insanely important, but any goal that is half in and half out of driving progress overall seems to make little sense. So I put that in because I agree with it. Yeah, I couldn't agree more. It's the goal. I mean, that's what... I mean, it was literally... OpenAI was a unicorn. It was an unnatural company, and now it's a little bit more natural. So finally, Keith, in a year or two or five years, is this just going to be a footnote to a footnote, or is this an important event? I mean, it was dramatic and fun to watch and play along with, but does it really matter, do you think? I think it will not be just a footnote. I think the wind, the intellectual wind, is in the sails of the losers this week. The effective altruism crowd are somewhat similar to the woke crowd in politics. They represent the growing mood of fear that our times kind of breed. And so I think, actually, this is a... They lost a battle, but they're not going away.
Speaker
Come on! I've got sunshine on a cloudy day. When it's cold outside, I've got the month of May. Everybody say... I guess you'd say, what can make me feel this way? My girl. I'm talking about my girl.