Transcript Viewer

Can OpenAI Shape Our Future?

Oct 31, 2025 · 2025 #41. Read the transcript grouped by speaker, inspect word-level timecodes, and optionally turn subtitles on for direct video playback

Speaker Labels

Name the speakers

Edit labels for this show, save them in this browser, or download a JSON override for the production folder.

Transcript Playback

Can OpenAI Shape Our Future?

Human Transcript

Timed transcript

Blocks are grouped by speaker for readability. Expand a block to inspect word-level timing.

Speaker 3

Hello, everybody. Two and a half years ago, I did a show with a woman called Nirit Weiss-Blatt on how the techlash has gone too far. At that time, she had a book out. It was called The Techlash and Tech Crisis Communication. It wasn't a bad book and it wasn't a bad conversation. I never gave it much thought. But this week, Nirit Weiss-Blatt has become a sensation in Silicon Valley, at least according to Keith Teare, who is always a bit of a sensation in Silicon Valley. That Was The Week...

Speaker 3

This week is built around Nirit Weiss-Blatt, and it's based on what he is calling a manufactured moral panic against AI. Keith, are you still sane?

Speaker 4

Am I still sane? Well, Andrew, as Pink Floyd said, have I ever been sane? Good question.

Speaker 4

No, you know, I picked up on this because I'd never heard of her until this last...

Speaker 3

I thought you always told me you watched all my shows. You didn't watch that show May 31st, 2022? You must have missed that one. You must have been on holiday or something.

Speaker 4

Must have been. Ah, my memory doesn't work very well anymore. But that said, I hadn't heard of her as far as I knew. And her name came up on a bunch of podcasts that I watched, namely the All-In podcast.

Speaker 3

Yeah, which is smaller than ours, of course, but it's still a decent podcast.

Speaker 4

It's the second biggest podcast in the world, after all.

Speaker 3

It's astonishing. Jason Calacanis, we used to know him when nobody knew him. Now everybody knows him. Exactly right.

Speaker 4

But they mentioned her. David Sacks, in particular, mentioned her. So I started to...

Speaker 3

Sacks, of course, is a Trump appointee, a right-wing South African who works now as, what, Trump's AI man?

Speaker 4

Yeah. I don't think of him as right-wing, but he's definitely a Trump person.

Speaker 3

He's more of a... Well, he's not exactly left-wing, is he? Let's face it. Especially if he's working for Donald Trump.

Speaker 4

He's a libertarian technocrat, is what he is. I don't know if that has a left or right label. It could be either one, depending on the topic. But he is right-wing on some things, that's for sure. Anyway, that's not what we're here to talk about. He mentioned her, so I went and drilled down, and she has this prolific writing. I've actually featured, I think, five of her pieces this week, because I think people should go and read them.

Speaker 3

I mean, she's made you all panicky, Keith. I've never seen so many articles by the same person. You've got the AI Panic Campaign, part one, part two. Those are the only two I actually include on my slides. But you're saying there are five parts to all this?

Speaker 4

Yeah, there's five. The first one is called What Ilya Sutskever Really Wants. And then in the media section in the newsletter, there's What's Wrong with AI Media Coverage and How to Fix It. And also Your Guide to the Top 10 AI Media Frames. Frame meaning you were framed. So there's quite a bit. And at first I thought she was just a conspiracy theorist, and then I read the stuff, and I think she's accurately describing something real that's happening. I don't think it's successful, and it's therefore non-impactful, and therefore ultimately it doesn't matter. But she's pointing the finger at Dustin Moskovitz, who was one of the early Facebook winners. I think he's one of the ones that got into a lawsuit with Zuckerberg back in the day. Yeah, he was a co-founder of Facebook. Yeah, apparently he's a big, you know, effective altruism person, and he's a billionaire, and he's underwriting a lot of the propaganda saying that AI is dangerous and needs to be regulated.

Speaker 3

Well, but hold on, wait a minute. So I...

Speaker 3

You're not acknowledging that there's a legitimate argument, again, not against AI, but a legitimate argument about some of the impact of AI, and you're suggesting that it's all some conspiracy by Dustin Moskovitz. Look, I think they're probably genuine people who really believe what they're saying. Dustin clearly does. Who's he working for? Israel? Putin? I mean, this is all bizarre.

Speaker 4

I don't think there's any suggestion he's working for anybody other than himself. It's his belief system. He could be shorting stocks. I have no idea what he's doing. But she definitely shows that there is an organized... group with talking points that they discussed beforehand with an attempt to then proliferate that through various media and campaigns.

Speaker 3

Okay, so this is all... Keith, this is not very Keith Teare. You've always been somewhat of a skeptic. Who are these people, apart from Dustin Moskovitz?

Speaker 4

There's quite a lot of them. I mean, you and I had a show about a year ago about effective altruism versus effective accelerationism.

Speaker 3

He's in jail. He can't be controlling this thing. What about Sam Bankman-Fried's parents? Are they involved somehow?

Speaker 4

You're being facetious, Andrew. Let's talk about it properly. Let's talk about it properly. I'm trying to keep a straight face, but this is bizarre. Well, you do acknowledge there is a thing called effective altruism. And it's a group of people.

Speaker 3

A movement, yeah. And I've done a lot of shows. Toby Ord, I know his name has come up this week. Jaan Tallinn. Jaan Tallinn, I've actually... who was a co-founder of Skype, a very smart and interesting guy who's concerned. I mean, all these people have, whether you agree with them or not, they have some concerns about the impact of tech on the world. And Tallinn was an investor in a think tank in Cambridge, very much... a successful tech entrepreneur.

Speaker 4

It's called the Center for Existential Risk. So that already tells you where he's at.

Speaker 3

Well, I mean, Martin Rees, who was formerly the Astronomer Royal, is very much involved with that. So there are lots of credible scientists who are involved with that. But anyway, go on.

Speaker 4

Well, look, there are a lot of people, including the Google guy you interviewed a couple of weeks ago, who are fearful that AI can get out of control and become bad for humanity. Geoffrey Hinton. Geoffrey Hinton. Former Google guy. Former Google guy. Who just won the Nobel Prize. Right. So I'm not trying to say that... you know, these people don't exist and they don't have strongly held views. They really do hold those views. What I'm saying is they have a very well-funded media campaign to propagate those views. And that's what she documents.

Speaker 3

Hold on, let me ask this. So, okay, there are people who strongly believe that AI is dangerous, and they are creating interest groups, nonprofits, to do this. Is she suggesting that there is some sort of organized conspiracy, that there is some... a central committee of the anti-AI brigade who meet globally? Maybe George Soros is involved with this too.

Speaker 4

Yeah, she's actually got reports out of meetings that were held as part of what she presents. So there are such meetings, and they focused on specific targets in the EU, for example, to get certain outcomes. You know, I was a political activist. This is all normal stuff.

Speaker 3

And it happens on the other side too. I mean, there are lots of pro-tech interest groups, astroturf networks that are funded by Google and OpenAI and all the others. So, I mean, it goes on both sides.

Speaker 4

Yeah. So I don't think she's using the word conspiracy anywhere, but she's saying there's an organized attempt to influence opinion around certain negative AI narratives.

Speaker 3

But that's just politics, Keith. That's the nature of things. It's always the case. So to call it... Whether you're calling it... She calls it a manufactured moral panic against AI. It's itself a kind of moral panic, isn't it?

Speaker 4

You could argue that, but I think that you've got to take a more nuanced view, which is whether AI is a net positive for the human race, whether there is a genuine risk, certainly any time soon, of it being out of control...

Speaker 3

Well, a net positive is two things. A net positive is, at this point, an incredibly complicated question.

Speaker 4

I don't think so.

Speaker 3

Well, you don't think so, but many others do. I mean, you're suggesting that anyone who doesn't think that is somehow involved in this manufactured moral panic?

Speaker 4

Well, look, for that to be true, let's get into the nuts and bolts of it, a word-guessing machine would have to turn into a nuclear-bomb-launching machine. And there's no technical way that that can happen. It's kind of science fiction on the doomsday side. So the entire narrative is flawed from a technical point of view.

Speaker 3

Oh, OK. So I would strongly disagree, but we've been over this lots. You don't know. I mean, why? Well, I know as much or as little as you do.

Speaker 4

I doubt that. Well, try to articulate how that could happen in terms of actual things that could happen.

Speaker 3

I mean, let's focus on the AI panic campaign.

Speaker 3

What are they saying? I mean, if what you're saying is they're entirely wrong and they don't know what they're talking about and there are leading scientists involved, I don't know, Tallinn, Martin Rees perhaps, then are they just wrong or do they have some other agenda?

Speaker 4

They basically are both. They're certainly wrong. I don't know what their agenda is other than government control over AI, and I don't know why they would have that agenda. I'm not a member. I'm showing you what she's written and her research. It is the case that the kind of headlines that they're supporting are doomsday headlines. One of them, in a TIME op-ed, and this was an op-ed by a guy called Yudkowsky, We Need to Shut It All Down, was the headline, and it was from the Campaign for AI Safety. So there is clearly a set of opinions that... Handbook of Communicating the Existential Risk from AI is a handbook that teaches their converts how to talk about that.

Speaker 3

So you're suggesting it's an anti-AI cult organized by Yudkowsky or some other -insky, Moskovitz. It all sounds a little like the Protocols of Zion or something.

Speaker 4

Yeah, I think if you define a cult as a failing attempt to create a mass movement, yes, that is what it is.

Speaker 3

Do you think George Soros is involved, Keith? Somehow, I bet he is. His name doesn't come up at all, so I doubt it. What about the man of the moment, Epstein? Is he involved? He's dead, Andrew. I bet if he was, he'd be involved, wouldn't he? If he wasn't dead, what about Prince Andrew?

Speaker 4

So read this. The Campaign for AI Safety came to the following conclusions. This is documented. One, convincing the public that AI is a danger should be a priority, but logical arguments alone may be insufficient. Creating urgency around AI. Let's just be clear, who wrote this to whom?

Speaker 3

The Campaign for AI Safety published it in a public document. And who is on this campaign? Who's financing this campaign? Is this Moskovitz again? He's one of the people that funds it. Well, it's a public interest group. I mean, it's published, but this sounds like the sort of stuff that the MAGA people say about Bill Gates, because if he funds something and they say it, then Gates must be behind it.

Speaker 4

They did a survey called AI Doom Prevention Message Testing, and they tested various messages with people. "Control AI before it controls you" was one of them. So... You know, you're right, it's a public interest group, but it's very much against the public interest. It's trying to demonize a technology.

Speaker 3

I mean, not in their view. They think that they're saving the world from a dangerous technology. Again, it's not clear to me what the agenda is. Why are they doing it then?

Speaker 4

Who knows? I'm puzzled as well.

Speaker 3

So you have no idea why they're doing it, but they are doing it. I mean, Dustin Moskovitz, why, he could spend his money on swimming pools or fancy cars. Why is he spending his money on this? Well, when you read what they're putting out, I'm pretty sure you're going to disagree with them. But that doesn't mean that it's a plot or a conspiracy.

Speaker 4

No one's saying it's a plot. Neither is she saying it's a plot. She's saying there's an attempt to create a panic.

Speaker 4

By the way, she's not even guessing that. It's PR. They say so. They're pretty transparent about saying that's what their goal is.

Speaker 3

There was an interesting piece, Keith, this week in the Journal about two kinds of business models now developing on the AI front. One from Anthropic, which is on track, and I'm quoting from the headline.

Speaker 4

By the way, Jeff Jarvis just left a comment on Facebook saying, I'm with you, Keith, and with Nirit Weiss-Blatt.

Speaker 3

Yeah, he loves Nirit Weiss-Blatt because she used to defend the internet, and now she's defending AI. Thank you, Jeff. But at least we have one listener.

Speaker 3

Is Anthropic connected with this, Keith? You seem to, before we went live, you suggested that somehow Anthropic are invested in this manufactured moral panic plot.

Speaker 1

Yes.

Speaker 4

Well, firstly, she doesn't mention them, and I have no knowledge that they're involved. But it is widely believed, and they are accused of being part of the lobby seeking fast and deep government regulation of AI. And that would align with these people. So it is possible. I have a feeling your dog's involved with it as well.

Speaker 3

Why is he barking so much?

Speaker 4

I think he found an effective altruist in my backyard.

Speaker 3

Yeah, I bet you'll find Dustin Moskovitz hiding under a bush in your Palo Alto backyard.

Speaker 3

But this is coming back. Maybe I'm repeating myself a little bit. You don't seem to give credibility to the fact that people have different opinions about this. Now, you and I disagree on OpenAI. You're very much in the OpenAI camp. I've always been... closer to Anthropic. Why isn't the Anthropic business model, trying to focus on turning a profit earlier with less focus on investment, why isn't that just another credible argument?

Speaker 4

Well, firstly, I'd say I'm not in the OpenAI camp vis-a-vis Anthropic. I use Anthropic probably every bit as much as I use OpenAI, and I like Anthropic's technology and software, and I even like listening to the founder on occasion. But I do consider OpenAI to be miles ahead from a competitive point of view and likely to be the leader for the unknown future. Turning a profit's a slightly different topic. My opinion, formed from experience, is that any startup that turns a profit early in its life is defrauding its investors. Investors don't want you to turn a profit. They want you to grow. That's why they invested in you. If they cared about a profit, they'd invest in public companies with profit. They're investing in private companies with growth on purpose. And they expect you to use the money to grow even faster and bigger. So turning a profit is, in many ways, an insult to your investors.

Speaker 3

Yeah, I'm not sure everyone... I mean, not sure everyone would agree with this. Is your friend, uh, Nirit, uh, Weiss-Blatt involved with this one too, about believing that anyone who tries to turn a profit is somehow doing a disservice to their...

Speaker 4

...investors? I don't think she's an economist, so I doubt she has an opinion on that, at least.

Speaker 3

I don't know what I do not acknowledge, Keith. That there are two alternative models? It's impossible. The Wall Street Journal piece this week on the Anthropic versus OpenAI business model, of Anthropic investing less and trying to turn a profit earlier, is a very interesting one. It has a lot of detail. Um, you're not... you don't acknowledge that that's a credible dispute between two different kinds of startups?

Speaker 4

Well, firstly, I don't think Anthropic would agree with that headline if they were asked. I think that's a writer putting that onto them, having read some documents about when it would be possible for them to turn a profit. I think Anthropic is an investing company.

Speaker 4

Imagine if a company that's growing, instead of reinvesting the money to grow faster, kept it in their bank account. That is not a better company. That's a worse company. Now, if they'd stopped growing, it's different. If growth has slowed and they're throwing off a lot of cash, and they can't invest it to grow faster, that's different. But we're not at that stage with AI. We're at the stage where it's very early.

Speaker 3

So again, it comes back to the question I was asking earlier. If you believe that it's just a non-issue, that all startups should try to lose as much money as they possibly can, what's Anthropic up to here? Are they trying to ban AI? Are they part of the manufactured moral panic?

Speaker 4

No, they're not trying to ban it. They're trying to have government regulate it in such a way (it's called regulatory capture) that they could benefit as an incumbent while others seeking to enter are regulated.

Speaker 3

So you don't accept the arguments of their concerns?

Speaker 4

Yes, Andrew.

Speaker 4

Nirit is listening, and she sent a message to us on Facebook. What did she say? She says, Keith, thank you. Hi, Andrew. A second listener here. Lots of laughs. I won't say I'm defending AI. I'm defending a nuanced balance.

Speaker 3

We need to get her on the show. Nirit... tell her that she can come on the other show, but we should do a three-way with Nirit, Keith, so to speak. So to speak.

Speaker 4

Sorry, Nirit. And she says, I'm defending a nuanced, balanced AI discourse and fighting the hype and panic.

Speaker 3

Yeah, well, let's move on. I don't know if I even trust you. I mean, it could be Jeff Jarvis pretending to be Nirit.

Speaker 4

No, you should be able to see it in your interface on Restream. I can see it.

Speaker 3

There's another interesting piece this week, Keith, in the Journal, which I sent you: Big Tech's soaring profits have an ugly underside, OpenAI's losses. You seem to just write... And you and I have talked about this endlessly. You seem to just write off OpenAI's losses. But this piece by James Mackintosh suggests that actually... this could be, in the long run, problematic. It's already having an impact on Oracle. The market value of Oracle, I think, has dropped 30% in the last month, perhaps in association with their circular deal with OpenAI. Are you not at all concerned with OpenAI's losses?

Speaker 4

Well, look, sorry, I just had to jump up because my dog was about to savage my sofa.

Speaker 4

And Nirit says she'll gladly join the show. You should go and find her on Facebook, Andrew, and send her the link. She could jump in right now.

Speaker 3

Well, you can.

Speaker 4

Can I? Let me just see if I can find the link. Nirit, I'm going to send you a link.

Speaker 3

Yeah.

Speaker 4

We may have you jump in, but let me find the link first. It's in my calendar.

Speaker 3

Here's the link. The link is easy.

Speaker 4

Yeah, and come on.

Speaker 4

Oh, that's next week's show, sorry.

Speaker 4

Here it is.

Speaker 3

Get Nirit in and continue talking, Keith. So before we get Nirit, let's talk about, and I don't know if she deals with OpenAI's losses, the ugly side. Do you see no ugly side to this? It just doesn't really matter? The more billions they lose, the better? People are stupid enough to give them money? So what we should do is define losses, because losses is one of those words.

Speaker 4

That sounds like Bill Clinton defining sex. Could be. So let's just define losses.

Speaker 4

For companies at different stages, "losses" has a completely different meaning. When you're a startup in your growth phase, it means that the money you're spending on your startup has not yet resulted in...

Speaker 4

...kind of peak growth. And so what you do is you take the revenue that you earn and, after paying your people and your facilities, you invest it in new growth. In your accounting, that's a loss, but it isn't as if you have a failing business. You have a growing business that needs to be fed to grow. That's very different than, let's say, you know, Blockbuster...

Speaker 3

Making a loss. Yeah, you might bring up Kodak. But you seem to be going against all... I mean, so the Journal has this interesting piece, very well researched.

Speaker 3

The Economist this week has a cover story that leads with how markets could topple the global economy. There's an excellent piece on the seven deadly sins of corporate exuberance, including too much debt. Are you suggesting that... the Wall Street Journal, The Economist, the FT, they're all wrong on this stuff, that they're concerned about the boom and its impact, maybe a bubble, on the world economy? Is this all part of the manufactured moral panic? It isn't a conspiracy.

Speaker 4

It's just a lack of understanding of Silicon Valley. And you must admit, Andrew, the reason Silicon Valley is successful is because it does understand this.

Speaker 3

Well, I understand what? How to waste money? How to lose money?

Speaker 4

But The Economist, the Wall Street Journal, the FT, they don't understand it. How to spend money to grow. So take OpenAI. Its revenues two years ago were under a billion. This year, they're going to be 20 billion. Next year, apparently, they're going to be 100 billion. So when they spend money to grow, is that a waste of money? It clearly isn't.

Speaker 3

No, but that's... I mean, you're reducing it to a sort of kindergarten-level discussion, Keith. No one's arguing that sometimes it's worth investing and that startups aren't profitable. No one's suggesting that. It's just that the level of losses at OpenAI, many people are concerned, is not viable in the long run. There's a lot of questions from economists about whether the company could indeed be profitable, even in 2029, although who knows what will happen between now and then. I mean, those are legitimate economic questions. They're not some sort of moral panic.

Speaker 4

Look, they're unanswerable. It's clickbait for their readers. But there's no economic substance underneath that, because no one knows what OpenAI's revenues will be in 2029. Some predictions say it might be as much as 300 billion by then in revenue.

Speaker 3

When OpenAI go to their investors and say, well, give me a few more billion or give me half a trillion, what would the investors think about?

Speaker 4

Well, what they do is, and I do this in my business, you come with a model... You mean like a Playboy model? What is a model, Keith? The Playboy model? Don't be so cynical.

Speaker 4

Look, obviously you run your own business. You're running Keen On.

Speaker 3

I don't know if I'd call that a business. But anyway, go on.

Speaker 4

It is your business. And you have to invest in it to grow. You don't, okay? So it doesn't grow.

Speaker 4

If you go look at your Facebook Messenger, there is a link in there.

Speaker 3

And if you can click on it, I think... Yeah, now we can hear from the source of the moral panic. But go on while we're waiting for Nirit.

Speaker 4

So there really is a lack of meeting of the minds between traditional... later stage public company economists and Silicon Valley economics, which is a growth-based economics, not a profit-based economics. Profit happens at much greater scale later. So if you look at Google's profit today, it's ginormous. But Google lost money for almost two decades.

Speaker 3

All right, almost. What do you mean almost? So you're suggesting Google was founded, what, in 1997, that it was losing money in 2017?

Speaker 4

It was losing money consistently. Amazon the same. Not losing money, spending money on growth. But it's still spending money on growth, Google. But now it has such huge revenues.

Speaker 4

The way you do it, and I don't want to teach my grandmother to suck eggs, but there's a thing called gross profit. Gross profit is about unit economics. Unit economics says it cost me this much to build this thing. When you buy it, 90% of what you pay is profit. So gross profit is inherent in OpenAI.

Speaker 3

So coming back to Anthropic, are you saying that Anthropic's real agenda here, in critiquing OpenAI and in suggesting that they want to focus on profitability, is that either they don't get economics or that their real agenda is involving the government and destroying the market somehow? Is that your point?

Speaker 4

Well, actually, it's Berber Jin who you should point the finger at, not Anthropic. I don't think Anthropic would have written this headline.

Speaker 3

So it's Berber Jin from the Wall Street Journal. But Anthropic is clearly, and the CEO of Anthropic, Amodei, has made this clear, that they're much more focused on realizing profit. Certainly, Sam Altman seems less concerned. What's his agenda here? Is it to close the market down? Is he wrong? I don't get it.

Speaker 4

I haven't heard the Anthropic CEO say it's focused on profit. I think it's focused on revenue growth, mainly, and on securing leadership in the B2B AI space, which he's been very successful at.

Speaker 3

But you've also said that Anthropic is somehow involved in this moral panic.

Speaker 4

Well, Anthropic seems to have a predilection for asking to be regulated...

Speaker 3

Oh, look, Nirit's here. So now we can hear from the... Hello, Nirit.

Speaker 2

Hi.

Speaker 3

Where's your video?

Speaker 2

I'm in my car and I don't know. I mean, with my phone, so it's not working.

Speaker 3

OK, well, we're going to hear it from you. Keith is a big fan. And as you know, I'm more skeptical. Tell us what this manufactured moral panic is.

Speaker 2

First, thank you. And thank you to Jeff Jarvis for pinging me on that. Yeah. So, to make a long story short, when I saw all the "AI is going to kill us all" media tour by a lot of people, it started like this investigative journey into two things. First, the ideology, like the canon, the literature. And the second thing is the follow-the-money type of investigation. So I wanted to know why they're setting the agenda, why they are like media stars and we see them everywhere, also in policy. And through that, I found the things that Keith linked to, which is the ecosystem and the main funders of it and the whole story behind it. So that's like the story of the AI panic aspects and the parts that he showed you.

Speaker 3

Well, let me ask this question. I asked Keith; it's probably better to ask you. Is there some organized moral panic? Is it Jaan Tallinn? Is it the Anthropic people? Is it Dustin Moskovitz? Do they meet in some smoky airport lounge to plan all this, Nirit? What's happening?

Speaker 2

No, no, no, I would never suggest such a thing. I don't think there's a conspiracy here. I think maybe Keith said it as well from reading my stuff. What I'm saying is that we are talking about the results of, like, decades of inflating this ecosystem of AI risk with a lot of funding and literature and concern. So I'm not saying they're not, like, sincerely concerned, or that they're doing it for the money. I'm just saying there are incentives to hype those things to the level that we see them taking over the conversation. So that's what I'm going after.

Speaker 3

But doesn't this, and I asked Keith, doesn't this exist on the other side? Aren't there a lot of nonprofits funded by big AI companies like OpenAI and Google and others who are arguing that AI will benefit mankind? We should all just sit back and enjoy it.

Speaker 2

I think you have two sides of this coin. But my investigation, and it's for my next book, is about the rise of the doomers, not the rise of the accelerationists.

Speaker 4

You just found a new one in Andrew. You should interview Andrew. He's a doomer.

Speaker 3

I'm not a doomer. I don't think I am a doomer.

Speaker 1

I'm joking.

Speaker 4

I'm joking.

Speaker 3

I'm doing to him what he does to me, which is winding him up. Is George Soros involved, Nirit?

Speaker 2

Not as far as I can tell, no.

Speaker 3

But don't they have a legitimate case for being at least concerned? Keith's nodding his head, but I didn't ask you, Keith. I'm asking Nirit.

Speaker 4

No, I was shaking my head, not nodding.

Speaker 3

Yeah, shaking his head. He's saying no. Nirit, there is a legitimate argument to be concerned. Many people are fearful. I mean, Elon Musk historically was deeply fearful. Sam Altman's fearful. So these are legitimate concerns.

Speaker 2

Well, I think my criticism, it's not that we shouldn't have any concerns. That's never something that I said. But rather, we should balance everything in more nuanced ways. So what I mean by that is, if the focus of everybody is on the long-term, futuristic, hypothetical risks and not on dealing with the current ones, it changes what politicians are dealing with, the proposals, their bills, the regulation. So that was like an incentive to say we should focus on many other stuff, and not just when they are really fearful of the catastrophic, you know, rogue AI and those types of things, because, again, they're hypotheticals, and we can look at other things more scientifically and empirically and deal with those stuff. I think that's like my main message.

Speaker 3

I mean, when we talk about AI, everything's hypothetical, both positive and negative. Nobody knows.

Speaker 2

But you don't kill it now, when it's still in its infancy, because of the fear.

Speaker 4

My belief, Andrew, is that the conversation here really isn't about who's right and who's wrong. We can all have our opinions about that. I'm strongly in favor of AI experimentation, and I really don't want regulation, but I understand there are other people who take the opposite point of view. I think what Nirit taught me, and I was not aware of it, is the chain of value that goes from the EA enthusiasts through to this campaign, what the specific messaging is and what their tactics are. I personally don't believe they're very effective. So if they want to be effective altruists, they're not being very effective, because their message isn't really landing. Most people are much more prepared to give AI, you know, a lot of flexibility to prove how good it can be.

Speaker 3

And Nirit, let me ask you, then, about this connection. Keith keeps on bringing up the effective altruists. As I said, Sam Bankman-Fried, the most famous effective altruist, is in jail. I don't suppose he's involved. Maybe he'll be out soon, because he wants Trump to pardon him. But who are these EA people? Is it the Oxford group? Is it Toby Ord? Is it Jaan Tallinn? Who are the people most centrally involved in this moral panic against AI?

Speaker 2

Well, the extreme edge is, of course, MIRI: Eliezer Yudkowsky and Nate Soares, with their new If Anyone Builds It, Everyone Dies type of message. They are like the extreme end. In the middle, but still hardcore, you have the Max Tegmark FLIs of the world, with Jaan Tallinn. And on the funding side, yes, you have mainly Dustin Moskovitz pouring hundreds of millions a year into this cause, you know, among his other causes.

Speaker 3

Is Moskovitz actively involved? Does he talk to the others? Do they have meetings? I have no idea. But he's funding... would it be fair to say, Nirit, that he's funding non-profits in the same way Bill Gates funds non-profits in the environment? He's not necessarily speaking for these organizations.

Speaker 2

I mean, if you look at the calls for applications, if you say, I want you to deal with the catastrophic risks and the long-term safety of frontier AI, it gives you that result: organizations and more organizations, like, hundreds of them a year doing that, because the incentive of getting the money from Open Philanthropy is there. So he doesn't need to dictate anything about what they're doing, but he's sending money to a specific place.

Speaker 3

And what about on the other side? I mean, aren't there people doing the same thing?

Speaker 4

Less organized, Andrew, because on the other side you don't really have to fund a propaganda campaign, because it's already received wisdom that AI is a good thing. You can see that in the demand and the amount of money we're all spending on it. It doesn't need to be boosted, whereas the negative message does need to be boosted.

Speaker 3

And what about... where's Musk on this, Nirit? Musk was involved; he was a co-founder of OpenAI because he was fearful of the impact of AI. So he was an original doomer. And so, in a sense, was Sam Altman. Where's Musk in your equation, Nirit?

Speaker 2

He's a very interesting person. He also funded the initial $10 million that got FLI, the Future of Life Institute, started. And then Vitalik Buterin poured in more than half a billion and turned this, you know, two-to-four-million-dollar organization into a more-than-half-a-billion-dollar one. And Elon Musk left that area of those think tanks and non-profits. And here I would call it more of a messiah complex type of thing: okay, I understand that it's so powerful, so I need to be the one who's going to build it and make it safe and benefit humanity. I think that's more like his story now.

Speaker 4

Well, the other thing is Grok. I used it today. That video you got from me, Andrew, was done on Grok. Grok has massively upgraded its capabilities and is now a genuine contender to be an OpenAI competitor.

Speaker 3

So are you suggesting that Musk's moral panic was only convenient when he didn't own the AI, and that now that he does, he's less of a moral panicker?

Speaker 4

He's definitely less of a moral panicker, although he's a fairly nuanced guy. He does, on occasion, talk about the need for guardrails. For example, when he was voted his trillion-dollar package, payable if he delivers eight and a half trillion dollars of value to Tesla, one of the reasons he wanted it was to have voting capability over the robotic force that he's building. He's talking about a million robots, and he wants to have the control to pull the plug if necessary. So he certainly still reserves judgment, but he's building it nonetheless. So he obviously believes he can avoid a bad outcome.

Speaker 3

I mean, Geoffrey Hinton was on the show a couple of months ago. He won the Nobel Prize; he's one of the father figures, supposedly the godfather, of AI. Is Hinton part of this moral panic? When he came on the show, he gave AI about a 10% or 15% chance of destroying humanity. Is he part of this network, of this plot?

Speaker 2

I won't call it that. I think that he and Yoshua Bengio both, at the same time, in February of 2023, got really panicked about the capabilities, about how fast everything was advancing. And they both experienced sort of an Oppenheimer syndrome: we created a monster and now we need to tame it. So they both really went into the x-risk realm with that in mind. I think that can explain some of his behavior.

Speaker 4

The opposite side is Yann LeCun, who announced he's leaving Facebook this week. Yann LeCun believes that LLMs are highly limited and won't be capable of achieving artificial general intelligence, and he's leaving in order to build what he thinks will be able to do it. So he's on the side of: you don't need to be scared; it can't even do what they claim it's going to be able to do.

Speaker 3

Right, but Nirit is writing about this manufactured moral panic against AI. A lot of people are ambivalent, a lot of people change their minds.

Speaker 2

Um, I think, Andrew, if you'll go through my links, the rabbit hole that I'm in is those rationalist, effective altruist groups that are really doing what I call a bait-and-switch thing, where they take people who want to be altruistic, who want to do, let's say, animal welfare or help the poor, but then inside the movement all they hear is: you have to save humanity from the extinction risks from AI, and this should be the focus of your career. And they take those young minds, they tell them, we have a few years to survive, and you are the chosen few who need to save us. And with this huge mission on their shoulders, they go and create some of those organizations that get funds from Open Philanthropy. So there's something here in the system that I want to highlight: there is some kind of indoctrination of those very young minds that they target. It's not sinister. I'm just saying it's happening.

Speaker 3

What about Anthropic's involvement in this? Before you came on the line, Nirit... well, you've saved the show. God knows what this show would have been like without you. It's wonderful to have you. Keith seems to be suggesting that Anthropic is on the doomer side. What's your reading of Anthropic's role in this growing chasm between the moral panic against AI and the people who believe that it actually can make the world a better place?

Speaker 2

Well, of all the, you know, frontier labs, Anthropic is by far the most doomer-y one. That's what it prides itself on: being the most AI-safety-focused group, one that, you know, recruits specifically from EA circles, uses the jargon, asks about people's p(doom). They're really into this ecosystem of existential risk. And with that in mind, I think there's also the savior complex of, we are better, and that's better.

Speaker 3

Is this Amodei, or just the corporate culture at Anthropic?

Speaker 2

Most of them.

Speaker 3

Interesting. Keith, do you have any final questions? Nirit's been so generous with her time.

Speaker 2

Yeah, actually, we really need to go. So last question, please.

Speaker 4

No, I don't have any questions, Nirit. And it's great to hear you. We don't know each other. And I discovered your stuff last week, having heard about it.

Speaker 3

He's a fanboy, Nirit.

Speaker 4

Be careful around Keith. Also, I'm detecting that you're Persian, I think.

Speaker 2

I'm Israeli. The weird accent is Israeli.

Speaker 4

Oh, it's Israeli. It sounds very similar to a Persian accent.

Speaker 3

Well, Nirit, you'll have to come on the show more fully. We can do the video. Maybe we'll get Keith on, and we can have a fuller conversation. But we really appreciate your generosity in calling in from your car. Wonderful conversation. Lovely.

Speaker 1

And an important one.

Speaker 3

And I'm thrilled that you are now the hottest thing in Silicon Valley. You're even on the All In podcast.

Speaker 2

I have various listeners, followers, and subscribers. Yeah, some of them in government.

Speaker 3

I guess. Well, congratulations, and I'll look forward to it. When's the new book out?

Speaker 2

Oh, it's a long time away, at least a year. But I'll be on the show before the book.

Speaker 3

Okay, we'll be in touch. Good luck with everything. Thank you so much. Thank you. Bye.

Speaker 3

Well, that was good, Keith. We have to thank our friend Jeff Jarvis for that. We'll have to get Jeff on, too. I don't know where Jeff is on this. I'm sure that he's ambivalent.

Speaker 4

Jeff is a man of many thoughts, so it could be anything.

Speaker 3

I don't know if he is. That was really an exceptional show, Keith. We started with just the two of us, but then we got Nirit on, who brought us some... some enlightenment on the AI panic. As you know, I'm a skeptic. Maybe I'm wrong, but we will see. And no doubt we will come back to this in the not-too-distant future. Fascinating week. Fascinating conversation, Keith. We'll talk next weekend. Thank you so much.

Speaker 4

Bye.

Speaker 1

That was a week. That was a week. Stand back. Think big. Take deep. That was a week.
