Transcript Viewer

That Was The Week 2023 #44

Nov 19, 2023 · 2023 #40. Read the transcript grouped by speaker, inspect word-level timecodes, and optionally turn subtitles on for direct video playback.

Speaker Labels

Name the speakers

Edit labels for this show, save them in this browser, or download a JSON override for the production folder.
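The page doesn't show the format of the JSON override it mentions, so the following is a purely hypothetical sketch of what a speaker-label override file might look like, keyed by generic diarization IDs. Every field name here is an assumption for illustration, not the production schema:

```json
{
  "show": "that-was-the-week-2023-44",
  "labels": {
    "SPEAKER_00": "Host",
    "SPEAKER_01": "Guest"
  }
}
```

Presumably the in-browser edits persist via local storage, while the downloaded file is meant to be dropped into the production folder so the build picks up the names (again, an inference from the description above).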

Human Transcript

Timed transcript

Blocks are grouped by speaker for readability. Expand a block to inspect word-level timing.

Speaker

So, I'm going to have you accompany me on my walk in the park with all my neighbours on their bikes and walking their dogs.


Speaker

This is Baron Park, that's the name of the neighbourhood and we're on our way to Ball Park. This is the actual name of the park.


Speaker

So, let's turn that camera around and make it track my face, if I can do that. There we go. And let's talk about this week. As I say in the editorial, this newsletter is one day late. And it's one day late because, as I was considering sending it yesterday, the news broke that Sam Altman had been fired by OpenAI. That was quickly followed by additional news that Greg Brockman, having been demoted from Chairman of the Board, had decided to resign. And now, overnight, at least three additional senior team members at OpenAI have resigned as well.


Speaker

And that, you know, kind of shaped this week's editorial, which I call the OpenAI debacle and subtitled EAC vs EF. So, what is going on at OpenAI is not that hard to understand. Basically, there's a pretty deep philosophical difference between the tech lead, Ilya (I won't try to pronounce Ilya's second name, but it is in the editorial), on the one hand, and the board of the original not-for-profit part of OpenAI, which is the parent of the whole organization, on the other. That board and Ilya seem to have orchestrated a board meeting that ousted Altman and had the consequence of the others leaving. Ron Conway calls it a board coup, and that does appear to be an appropriate designation.

So what is EF and what is EAC? These are two broad philosophical attitudes to technology, and in Silicon Valley at least there's a big debate between advocates of both points of view. EF you'll be familiar with because of the Sam Bankman-Fried trial. It stands for effective altruism, so EA, not EF, sorry. And EAC stands for effective acceleration. What these designations really relate to is this: effective altruism places doing no harm to humans and, if possible, doing good, ahead of all other criteria. And it kind of coincides with the view that technology is dangerous; that's at least a possible sub-part of effective altruism. Effective acceleration, on the other hand, is the point of view that nothing should stand in the way of technical innovation, because technical innovation itself delivers human benefits, and that if you try to regulate it, or put friction in the way and slow it down, that will have a negative impact on humanity. So both groups would claim to be humanistic.


Speaker

And, you know, if you take a superficial view, you could find things to agree with in both. But when put into the real world, effective altruism, it turns out, is a disguised form of being a Luddite: it's basically saying slow down, or stop innovation until we know it's safe. And it coincides with the whole idea that AI is dangerous, or at least potentially dangerous. So, back to the firing of Sam Altman. Altman really represents the EAC point of view. Whether explicitly or implicitly, he believes that innovation as a driver, all by itself, is super important for the future of the human race. He doesn't ignore the downside; he talks often, for example, about jobs being threatened, and he has his Worldcoin project that focuses on how to address that. But he's a pro-innovation guy, as are most of my friends in Silicon Valley. He also believes that the profit motive is a key driver to incentivize innovation. So even though OpenAI began as a not-for-profit entity focused on open-source software, he has evolved it into a for-profit entity with large amounts of revenue, which, of course, are required to even run the system, as opposed to grants, donations, or other philanthropic methods of funding it. So he's basically pro-innovation, and pro-profit-motive as a driver. Ilya represents the opposite sensibilities, and I don't think either of these two positions is binary; there's probably quite a bit of overlap between the two of them. But insofar as Ilya is different, he believes that the not-for-profit core of the philosophy, the pro-human core of the philosophy, cannot properly be executed in a for-profit organization. And he certainly felt that last week, when the developer day happened and the App Store was announced for OpenAI, and it was clear that large numbers of people signed up for $20-a-month ChatGPT Plus as a result, because they wanted to play with the new features, especially the customized GPTs that you can build for yourself.
This led to an immediate recognition that OpenAI was going to become huge and generate lots of revenue, and that accelerated something that was already happening, which was intense internal debate at OpenAI, with Ilya orchestrating an internal coup. That's basically what's happened. So if you want to decide which side to come down on, I think my editorial this week will help. There are lots of references in there; everything is referenced, and you can click through and read the originals. But it basically comes down to this: if you believe in innovation and the profit motive as a driver of it, and you don't really go along with this whole fear of technology and fear of big tech, as it were, and you don't really agree with government regulation as a good thing at this stage of the technology, then you would side with Altman. On the other hand, if you are fearful of big tech, and equally fearful of the profit motive, and equally fearful of AI, you would probably side with the board and with Ilya. Now, the board is described by Kyle Harrison, in his essay that is in this week's newsletter, as a bunch of randos who have way too much power. And it is worth noting that this is not a board you would expect to see in most Silicon Valley companies. It's also worth saying that, in taking the decision they did (if you take a look at the organization chart of OpenAI in the editorial), they fired Altman from a wholly-owned subsidiary, which is the profitable, for-profit part of OpenAI. And inside that part, there are investors: Microsoft, for example, apparently owns 49 percent, and Vinod Khosla, who's referenced in the editorial, is an investor, as are other venture capitalists. Apparently, these people were not notified of what was about to happen, and hundreds of billions of dollars of their investments are now put at risk due to decisions that were outside their control and not even within their knowledge.
So I would anticipate that this board of directors has acted in a way that will certainly trigger some kind of reaction from many investors. I don't know what that will be, but there'll be some kind of reaction. So that is this week. As I said, there's no video with Andrew this week because he's traveling, so this walk in the park is the alternative. Hope you enjoy it, and let me know what you think in the comments. See you next week.


Speaker

When it's cold outside, I've got the month of May. I guess you'd say, what can make me feel this way? My girl, my girl, talking about my girl, my girl. I've got...
