Oct 31, 2025 · 2025 #41
Can OpenAI Shape Our Future?
Speaker 3
Hello, everybody. Two and a half years ago, I did a show with a woman called Nirit Weiss-Blatt on how the techlash has gone too far. At that time, she had a book out called The Techlash and Tech Crisis Communication. It wasn't a bad book and it wasn't a bad conversation. I never gave it much thought. But this week, Nirit Weiss-Blatt has become a sensation in Silicon Valley, at least according to Keith Teare, who is always a bit of a sensation in Silicon Valley. That was the week...
I thought you always told me you watched all my shows. You didn't watch that show May 31st, 2022? You must have missed that one. You must have been on holiday or something.
Must have been. My memory doesn't work very well anymore. But that said, I hadn't heard of her as far as I knew. And her name came up on a bunch of podcasts that I watch, namely the All In podcast.
He's a libertarian technocrat, is what he is. I don't know if that has a left or right label. It could be either one, depending on the topic. But he is right-wing on some things, that's for sure. Anyway, that's not what we're here to talk about. He mentioned her, so I went and drilled down, and she has this prolific writing. I've actually featured, I think, five of her pieces this week, because I think people should go and read them.
I mean, she's made you all panicky, Keith. I've never seen so many articles by the same person. You've got the AI Panic Campaign, part one, part two. Those are the only two I actually include on my slides. But you're saying there are five parts to all this?
Yeah, there's five. The first one is called What Ilya Sutskever Really Wants. And then in the media section in the newsletter, there's What's Wrong with AI Media Coverage and How to Fix It. And also Your Guide to the Top 10 AI Media Frames. Frame meaning you were framed. So there's quite a bit. And at first I thought she was just a conspiracy theorist, but I read the stuff and I think she's accurately describing something real that's happening. I don't think it's successful, and therefore it's non-impactful, and therefore ultimately it doesn't matter. But she's pointing the finger at Dustin Moskovitz, who was one of the early Facebook winners. I think he's one of the ones that got into a lawsuit with Zuckerberg back in the day. Yeah, he was a co-founder of Facebook. Yeah, apparently he's a big, you know, effective altruism person, and he's a billionaire, and he's underwriting a lot of the propaganda saying that AI is dangerous and needs to be regulated.
You're not acknowledging that there's a legitimate argument, not against AI, but about some of the impact of AI, and you're suggesting that it's all some conspiracy by Dustin Moskovitz. Look, I think they're probably genuine people who really believe what they're saying. Dustin clearly does. Who's he working for? Israel? Putin? I mean, this is all bizarre.
I don't think there's any suggestion he's working for anybody other than himself. It's his belief system. He could be shorting stocks. I have no idea what he's doing. But she definitely shows that there is an organized... group with talking points that they discussed beforehand with an attempt to then proliferate that through various media and campaigns.
You're being facetious, Andrew. Let's talk about it properly. Let's talk about it properly. I'm trying to keep a straight face, but this is bizarre. Well, you do acknowledge there is a thing called effective altruism. And it's a group of people.
A movement, yeah. And I've done a lot of shows. Toby Ord, I know his name has come up this week. Jaan Tallinn. Jaan Tallinn, who was a co-founder of Skype, a very smart and interesting guy who's concerned. I mean, all these people, whether you agree with them or not, have some concerns about the impact of tech on the world. And Tallinn was an investor in a think tank in Cambridge, very much a successful tech entrepreneur.
Well, I mean, Martin Rees, formerly the Astronomer Royal, is very much involved with that. So there are lots of credible scientists who are involved with that. But anyway, go on.
Well, look, there are a lot of people, including the Google guy you interviewed a couple of weeks ago, who are fearful that AI can get out of control and become bad for humanity. Geoffrey Hinton. Geoffrey Hinton. Former Google guy. Former Google guy. Who just won the Nobel Prize. Right. So I'm not trying to say that, you know, these people don't exist and they don't have strongly held views; they really do hold those views. What I'm saying is they have a very well-funded media campaign to propagate those views. And that's what she documents.
Hold on, let me ask this. So, okay, there are people who strongly believe that AI is dangerous, and they are creating interest groups and nonprofits to do this. Is she suggesting that there is some sort of organized conspiracy, that there is some central committee of the anti-AI brigade who meet globally? Maybe George Soros is involved with this too.
Yeah, she's actually got reports out of meetings that were held as part of what she presents. So there are such meetings, and they focused on specific targets in the EU, for example, to get certain outcomes. You know, I was a political activist. This is all normal stuff.
And it happens on the other side too. I mean, there are lots of pro-tech interest groups, astroturf networks that are funded by Google and OpenAI and all the others. So, I mean, it goes on both sides.
Yeah. So I don't think she's using the word conspiracy anywhere, but she's saying there's an organized attempt to influence opinion around certain negative AI narratives.
But that's just politics, Keith. That's the nature of things. It's always the case. So to call it... Whether you're calling it... She calls it a manufactured moral panic against AI. It's itself a kind of moral panic, isn't it?
You could argue that, but I think you've got to take a more nuanced view, which is whether AI is a net positive for the human race, whether there is a genuine risk, certainly any time soon, of it being out of control...
Well, you don't think so, but many others do. I mean, you're suggesting that anyone who doesn't think that is somehow involved in this manufactured moral panic?
Well, look, for that to be true, let's get into the nuts and bolts of it, a word-guessing machine would have to turn into a nuclear-bomb-launching machine. And there's no technical way that that can happen. It's kind of science fiction on the doomsday side. So the entire narrative is flawed from a technical point of view.
What are they saying? I mean, if what you're saying is they're entirely wrong and they don't know what they're talking about and there are leading scientists involved, I don't know, Tallinn, Martin Rees perhaps, then are they just wrong or do they have some other agenda?
They basically are both. They're certainly wrong. I don't know what their agenda is other than government control over AI, and I don't know why they would have that agenda. I'm not her, remember; I'm showing you what she's written and her research. It is the case that the kind of headlines that they're supporting are doomsday headlines. One of them, in a Time op-ed by a guy called Yudkowsky, was headlined We Need to Shut It All Down, and it was from the Campaign for AI Safety. So there is clearly a set of opinions that... The Handbook of Communicating the Existential Risk from AI is a handbook that teaches their converts how to talk about that.
So you're suggesting it's an anti-AI cult organized by Yudinsky or some other Insky, Moskowitz. It all sounds a little like the Protocols of Zion or something.
Do you think George Soros is involved, Keith? Somehow, I bet he is. His name doesn't come up at all, so I doubt it. What about the man of the moment, Epstein? Is he involved? He's dead, Andrew. I bet if he was, he'd be involved, wouldn't he? If he wasn't dead, what about Prince Andrew?
So read this. The Campaign for AI Safety came to the following conclusions. This is documented. One: convincing the public that AI is a danger should be a priority, but logical arguments alone may be insufficient. Creating urgency around AI. Let's just be clear, who wrote this, to whom?
The Campaign for AI Safety published it in a public document. And who is on this campaign? Who's financing this campaign? Is this Moskovitz again? He's one of the people that funds it. Well, it's a public interest group. I mean, it's published, but this sounds like the sort of stuff the MAGA people say about Bill Gates: if he funds something and they say it, then Gates must be behind it.
They did a survey called AI Doom Prevention Message Testing, and they tested various messages with people. Control AI before it controls you was one of them. So... You know, you're right, it's a public interest group, but it's very much against the public interest. It's trying to demonize a technology.
I mean, not in their view. They think that they're saving the world from a dangerous technology. Again, it's not clear to me what the agenda is. Why are they doing it then?
So you have no idea why they're doing it, but they are doing it. I mean, Dustin Moskovitz, why, he could spend his money on swimming pools or fancy cars. Why is he spending his money on this? Well, when you read what they're putting out, I'm pretty sure you're going to disagree with them. But that doesn't mean that it's a plot or a conspiracy.
There was an interesting piece, Keith, this week in the journal about two kinds of business models now developing on the AI front. One from Anthropic, which is on track, and I'm quoting from the headline.
Is Anthropic connected with this, Keith? You seem to, before we went live, you suggested that somehow Anthropic are invested in this manufactured moral panic plot.
Well, firstly, she doesn't mention them, and I have no knowledge that they're involved. But it is widely believed, and they are accused of, being part of the lobby seeking fast and deep government regulation of AI. And that would align with these people. So it is possible. I have a feeling your dog's involved with it as well.
But this is coming back. Maybe I'm repeating myself a little bit. You don't seem to give credibility to the fact that people have different opinions about this. Now, you and I disagree on OpenAI. You're very much in the OpenAI camp. I've always been closer to Anthropic. Why isn't the Anthropic business model, trying to turn a profit earlier with less focus on investment, just another credible argument?
Well, firstly, I'd say I'm not in the OpenAI camp vis-à-vis Anthropic. I use Anthropic probably every bit as much as I use OpenAI, and I like Anthropic's technology and software, and I even like listening to the founder on occasion. But I do consider OpenAI to be miles ahead from a competitive point of view and likely to be the leader for the foreseeable future. Turning a profit is a slightly different topic. My opinion, formed from experience, is that any startup that turns a profit early in its life is defrauding its investors. Investors don't want you to turn a profit. They want you to grow. That's why they invested in you. If they cared about profit, they'd invest in public companies with profits. They're investing in private companies with growth on purpose, and they expect you to use the money to grow even faster and bigger. So turning a profit is, in many ways, an insult to your investors.
Yeah, I'm not sure everyone, I mean, I'm not sure everyone would agree with that. Is your friend Nirit Weiss-Blatt involved with this one too, believing that anyone who tries to turn a profit is somehow doing a disservice to their investors?
Do you not acknowledge, Keith, that there are two alternative models? Is it impossible? The Wall Street Journal piece this week on the Anthropic-versus-OpenAI business model, of Anthropic investing less and trying to turn a profit earlier, is a very interesting one. It has a lot of detail. You don't acknowledge that that's a credible dispute between two different kinds of startups?
Well, firstly, I don't think Anthropic would agree with that headline if they were asked. I think that's a writer putting that onto them, having read some documents about when it would be possible for them to turn a profit. I think Anthropic is an investing company.
Imagine if a company that's growing, instead of reinvesting the money to grow faster, kept it in their bank account. That is not a better company. That's a worse company. Now, if they'd stopped growing, it's different. If growing has slowed and they're throwing off a lot of cash, and they can't invest it to grow faster, that's different. But we're not at that stage with AI. We're at the stage where it's very early.
So again, it comes back to the question I was asking earlier. If you believe that it's just a non-issue, that all startups should try to lose as much money as they possibly can, what's Anthropic up to here? Are they trying to ban AI? Are they part of the manufactured moral panic?
No, they're not trying to ban it. They're trying to have government regulate it in such a way (it's called regulatory capture) that they could benefit as an incumbent while others seeking to enter are regulated. So you don't accept the arguments behind their concerns?
Nirit is listening, and she sent a message to us on Facebook. What did she say? She says, Keith, thank you. Hi, Andrew. A second listener here. Lots of laughs. I won't say I'm defending AI. I'm defending a nuanced balance.
We need to get her on the show. Nirit, tell her that she can come on the other show, but we should do a three-way with Nirit, Keith, so to speak. So to speak.
There's another interesting piece this week, Keith, in the journal, which I sent you. Big tech's soaring profits have an ugly underside: OpenAI's losses. And you and I have talked about this endlessly. You seem to just write off OpenAI's losses. But this piece by James Mackintosh suggests that actually this could be problematic in the long run. It's already having an impact on Oracle. The market value of Oracle, I think, has dropped 30% in the last month, perhaps in association with their circular deal with OpenAI. Are you not at all concerned with OpenAI's losses?
Get Nirit in and continue talking, Keith. So before we get Nirit in, let's talk about, and I don't know if she deals with OpenAI's losses, the ugly side. Do you see no ugly side to this? It just doesn't really matter? The more billions they lose, the better? People are stupid enough to give them money. So what we should do is define losses, because losses is one of those words that, for companies at different stages, has a completely different meaning. When you're a startup in your growth phase, it means that the money you're spending on your startup has not yet resulted in your kind of peak growth. And so what you do is you take the revenue that you earn and, after paying your people and your facilities, you invest it in new growth. And in your accounting that's a loss, but it isn't as if you have a failing business. You have a growing business that needs to be fed to grow. That's very different than, let's say, you know, Blockbuster making a loss. Yeah, you might bring up Kodak. But you seem to be going against all... I mean, the journal has this interesting piece, very well researched.
The Economist this week has a cover story that leads with how markets could topple the global economy. There's an excellent piece on the seven deadly sins of corporate exuberance, including too much debt. Are you suggesting that the Wall Street Journal, The Economist, the FT, they're all wrong on this stuff, wrong to be concerned about the boom and its impact, that maybe it's a bubble on the world economy? Is this all part of the manufactured moral panic? It isn't a conspiracy.
It's just a lack of understanding of Silicon Valley. And you must admit, Andrew, the reason Silicon Valley is successful is because it does understand this.
But The Economist, the Wall Street Journal, the FT, they don't understand it? How to spend money to grow. So take OpenAI. Its revenues two years ago were under a billion. This year, they're going to be 20 billion. Next year, apparently, they're going to be 100 billion. So when they spend money to grow, is that a waste of money? It clearly isn't.
No, but that's... I mean, you're reducing it to a sort of kindergarten-level discussion, Keith. No one's arguing that it's never worth investing, or that startups must be profitable. No one's suggesting that. It's just that the level of OpenAI's losses, many people are concerned, is not viable in the long run. There are a lot of questions from economists about whether the company could indeed, even in 2029, although who knows what will happen between now and then, be profitable. I mean, those are legitimate economic questions. They're not some sort of moral panic.
Look, they're unanswerable. It's clickbait for their readers, but there's no economic substance underneath it, because no one knows what OpenAI's revenues will be in 2029. Some predictions say it might be as much as 300 billion in revenue by then.
Well, what they do is, and I do this in my business, you come up with a model. You mean like a Playboy model? What is a model, Keith? The Playboy model? Don't be so cynical.
So there really is a lack of meeting of the minds between traditional later-stage public-company economics and Silicon Valley economics, which is growth-based economics, not profit-based economics. Profit happens at much greater scale later. So if you look at Google's profit today, it's ginormous. But Google lost money for almost two decades.
It was losing money consistently. Amazon the same. Not losing money, spending money on growth. But it's still spending money on growth, Google. But now it has such huge revenues.
The way you do it, and I don't want to teach my grandmother to suck eggs, but there's a thing called gross profit. Gross profit is about unit economics. Unit economics says: it cost me this much to build this thing, and when you buy it, 90% of what you pay is profit. So gross profit is inherent in OpenAI.
So coming back to Anthropic, are you saying that Anthropic's real agenda here, in critiquing OpenAI and in suggesting that they want to focus on profitability, is either that they don't get economics or that their real agenda is getting the government involved and destroying the market somehow? Is that your point?
So it's Berber Jin from the Wall Street Journal. But Anthropic clearly, and the CEO of Anthropic, Amodei, has made this clear, is much more focused on realizing profit. Certainly, Sam Altman seems less concerned. What's his agenda here? Is it to close the market down? Is he wrong? I don't get it.
I haven't heard the Anthropic CEO say it's focused on profit. I think it's focused on revenue growth, mainly, and on securing leadership in the B2B AI space, which he's been very successful at.
First, thank you. And thank you to Jeff Jarvis for pinging me on that. Yeah. So to make a long story short, when I saw all the "AI is going to kill us all" media tour by a lot of people, it started, like, this investigative journey into two things. First, the ideology, like the canon, the literature. And the second thing is the follow-the-money type of investigation. So I wanted to know why they're setting the agenda, why they are, like, media stars and we see them everywhere, also in policy. And through that I found the things that Keith linked to, which is the ecosystem and the main funders of it and the whole story behind it. So that's, like, the story of the AI panic aspects and the parts that he showed you.
Well, let me ask this question. I asked Keith; it's probably better to ask you. Is there some organized moral panic? Is it Jaan Tallinn? Is it the Anthropic people? Is it Dustin Moskovitz? Do they meet in some smoky airport lounge to plan all this, Nirit? What's happening?
No, no, no, I would never suggest such a thing. I don't think there's a conspiracy here. I think maybe Keith said it as well from reading my stuff. What I'm saying is that we are talking about the results of, like, decades of inflating this ecosystem of AI existential risk with a lot of funding and literature and concern. So I'm not saying they're not, like, sincerely concerned, or that they're doing it for the money. I'm just saying there are incentives to hype those things to the level that we see them taking over the conversation. So that's what I'm going after.
But doesn't this, and I asked Keith, doesn't this exist on the other side? Aren't there a lot of nonprofits funded by big AI companies like OpenAI and Google and others arguing that AI will benefit mankind, that we should all just sit back and enjoy it?
I think you have two sides of this coin. But my investigation, and it's for my next book, is about the rise of the doomers, not the rise of the accelerationists.
Yeah, shaking his head. He's saying no. Nirit, there is a legitimate argument to be concerned. Many people are fearful. I mean, Elon Musk historically was deeply fearful. Sam Altman's fearful. So these are legitimate concerns.
Well, I think my criticism, it's not that we shouldn't have any concerns. That's never something that I said. But rather that we should balance everything in more nuanced ways. So what I mean by that is, if the focus of everybody is on the long-term, futuristic, hypothetical risks and not on dealing with the current ones, it changes what politicians are dealing with: the proposals, their bills, the regulation. So that was, like, an incentive to say we should focus on many other things, and not just on being really fearful of the catastrophic, you know, rogue AI and those types of things, because again, they're hypotheticals, and we can look at other things more scientifically and empirically and deal with those. I think that's, like, my main message.
My belief, Andrew, is that the conversation here really isn't about who's right and who's wrong. We can all have our opinions about that. I'm strongly in favor of AI experimentation, and I really don't want regulation, but I understand there are other people who take the opposite point of view. I think what Nirit taught me, and I was not aware of, is the chain of value that goes from the EA enthusiast through to this campaign, what the specific messaging is and what their tactics are. I personally don't believe they're very effective. So if they want to be effective altruists, they're not being very effective, because their message isn't really landing. Most people are much more prepared to give AI, you know, a lot of flexibility to prove how good it can be.
And Nirit, let me ask you about this connection. Keith keeps on bringing up the effective altruists. As I said, Sam Bankman-Fried, the most famous effective altruist, is in jail. I don't suppose he's involved. Maybe he'll be out soon because he wants Trump to pardon him. But who are these EA people? Is it the Oxford group? Is it Toby Ord? Is it Jaan Tallinn? Who are the people most centrally involved in this moral panic against AI?
Well, the extreme edge is, of course, MIRI: Eliezer Yudkowsky and Nate Soares with their new If Anyone Builds It, Everyone Dies type of message. They are, like, the extreme end. In the middle, but still hardcore, you have the Max Tegmark FLIs of the world, with Jaan Tallinn. And in the funding place, yes, you have mainly Dustin Moskovitz pouring hundreds of millions a year into this cause, you know, among his other causes.
Is Moskovitz actively involved? Does he talk to the others? Do they have meetings? I have no idea. But would it be fair to say, Nirit, that he's funding nonprofits in the same way Bill Gates funds nonprofits in the environment? He's not necessarily speaking for these organizations.
I mean, if you look at the calls for applications, if you say, "I want you to deal with the catastrophic risks and the long-term safety of frontier AI," it gives you that result: organizations and more organizations, like, hundreds of them a year, doing that, because the incentive of getting the money from Open Philanthropy is there. So he doesn't need to dictate anything about what they're doing, but he's sending money to a specific place.
Less organized, Andrew, because on the other side, you don't really have to fund a propaganda campaign, because it's already received wisdom that AI is a good thing. You can see that with the demand and the amount of money we're all spending on it. It doesn't need to be boosted, whereas the negative message does need to be boosted.
And what about, where's Musk on this, Nirit? Musk was involved; he was a co-founder of OpenAI because he was fearful of the impact of AI. So he was an original doomer. And so, in a sense, was Sam Altman. Where's Musk in your equation, Nirit?
He's a very interesting person. He also funded the initial $10 million that got FLI, the Future of Life Institute, started. And then Vitalik Buterin poured in more than half a billion and turned this, you know, two-to-four-million-dollar organization into a more-than-half-a-billion-dollar one. And Elon Musk left that area of those think tanks and nonprofits. And I think here I would more call it, like, a messiah complex type of thing: okay, I understand that it's so powerful, then I need to be the one who's going to build it and make it safe and benefit humanity. I think that's more like his story now.
Well, the other thing is Grok. I used it today. That video you got from me, Andrew, was done on Grok. Grok has massively upgraded its capabilities and is now a genuine contender to be an OpenAI competitor.
He's definitely less of a moral panicker, although he's a fairly nuanced guy. He does, on occasion, talk about the need for guardrails. For example, when he was voted to get his trillion-dollar package if he delivers eight and a half trillion of value to Tesla, one of the reasons he wanted it was to have voting capability over the robotic force that he's building. He's talking about a million robots, and he wants to have control to pull the plug if necessary. So he certainly still reserves judgment, but he's building it nonetheless. So he obviously believes he can avoid a bad outcome.
I mean, Geoffrey Hinton was on the show a couple of months ago, who won the Nobel Prize, one of the father figures, supposedly the godfather of AI. Is Hinton part of this moral panic? When he came on the show, he gave AI about a 10% or 15% chance of destroying humanity. Is he part of this network, of this plot?
I won't call it that. I think that he and Yoshua Bengio both, at the same time, February of 2023, got really panicked about the capabilities, about how fast everything advanced. And they both experienced sort of like an Oppenheimer syndrome, a "we created a monster and now we need to tame it" type of thing. So they both really went into the x-risk realm with that in mind. I think that can explain some of his behavior.
The opposite side is Yann LeCun, who announced he's leaving Facebook this week. Yann LeCun believes that LLMs are highly limited and won't be capable of achieving artificial general intelligence, and he's leaving in order to build what he thinks will be able to do it. So he's on the side of: you don't need to be scared; it can't even do what they claim it's going to be able to do.
Um, I think, Andrew, if you'll go through my links, I think the rabbit hole that I'm in is those rationalist effective altruist groups that are really, like, doing what I call the bait-and-switch thing, where they take people who want to be altruistic, who want to do, let's say, animal welfare or help the poor, but then inside the movement all they hear is: you have to save humanity from the extinction risks from AI, and this should be the focus of your career. And they take those young minds, they tell them, we have a few years to survive, and you are the chosen few who need to save us. And with this huge mission on their shoulders, they go and create some of those organizations that get funds from Open Philanthropy. So there's something here in the system that I want to highlight: that there is some kind of indoctrination of those very young minds that they target. It's not sinister. I'm just saying it's happening.
What about Anthropic's involvement with this? Before you came on the line, Nirit, you saved the show; without you, God knows what this show would have been like. It's wonderful to have you. Keith seems to be suggesting that Anthropic are on the doomer side. What's your reading of Anthropic's role in this growing chasm between the moral panic against AI and the people who believe that it actually can make the world a better place?
Well, between all the, you know, frontier labs, Anthropic is by far the most doomer-y one. That's, like, what it prides itself on: being the most AI-safety-focused group that, you know, recruits specifically from EA circles, uses the jargon, you know, asks about people's p(doom), and they're really into this ecosystem of existential risk. And with that in mind, I think it's also the savior complex: that we are better, and that's better.
No, I don't have any questions, Nirit. And it's great to hear you. We don't know each other. And I discovered your stuff last week, having heard about it.
Well, Nirit, you'll have to come more fully on the show. We can do the video. Maybe we'll get Keith and we can have a fuller conversation. But we really appreciate your generosity in calling in from your car. Wonderful conversation. Lovely.
Well, that was good, Keith. We have to thank our friend Jeff Jarvis for that. We'll have to get Jeff on, too. I don't know where Jeff is on this. I'm sure that he's ambivalent.
Jeff is a man of many thoughts, so it could be anything.
Speaker 3
I don't know if he's good. That was really an exceptional show, Keith. We started with just the two of us, but then we got Nirit on, who brought us some enlightenment on the AI panic. I, as you know, am a skeptic. Maybe I'm wrong, but we will see. And no doubt we will come back to this in the not too distant future. Fascinating week. Fascinating conversation, Keith. We will talk next weekend. Thank you so much.