Feb 14, 2026 · 2026 #4
AI Explosion
Hello, everybody. It's Sunday, February the 15th, 2026. We are back to our weekly tech roundup with my old friend Keith Teare from That Was The Week. This week, Keith leads with a video. So for people listening, you'll have to imagine the video. I'll describe it. For people watching, you will see it. It's the image of a half full glass.
All that silence for people listening who can't see the glass. One glass was half full of delicious-looking water. The other glass was half full of very unpleasant-looking insects. Keith, I assume this was made on AI.
Yeah, and one of the glasses is half... Well, that's the whole point, of course, of the editorial that Keith is creating a metaphor around. The title of this week's newsletter, That Was The Week's newsletter, is AI Fallout. And it's a matter of perspective; it depends how you look at it. On the one hand, the glass is half full of water. On the other hand, it's half full of spiders. So tell us these two perspectives, Keith, what they mean and why you chose them.
Well, it was really driven by the essay Something Big is Happening, written by Matt Schumer, which was a fairly grandiose reckoning of what's happened in AI in the last two weeks from his point of view in his job as an engineer. And he made the point that he stopped writing code. A bit like last week, when I said I'd abandoned the software I used to produce That Was The Week with, except he took it much further. And he did it as a Twitter essay, and he got more than 50 million views. So it became viral. Because of that, lots and lots of people responded to it. There are six essays in this week's essay list, and I think more than half of them are consequences of Matt Schumer's essay, which is saying that the end of, in quotes, white-collar work is much nearer than any of us imagine. And secondly, his advice to people, especially engineers, is: become the expert in using AI in your environment very quickly, because a year from now everyone will be an expert. But if you do it really quickly, you can be the guy or girl internally who is looked at as, you know, the one to go to. And so you can preserve a position within your organization. So some people described it as slop, because he probably did use AI to help him write it, but it is coherent slop.
So you've got 50 million views, this Matt... Schumer thing, something big is happening. It's not exactly news. Everybody knows something big is happening. What is he saying that's original or important?
Wow, that's dramatic. Every week, though, Keith, we hear the same message. Every week, this is the biggest week ever in AI. It changes everything. And then next week, we have the same message. When's this going to stop?
So shouldn't we be a little bit more careful? I mean, I take Schumer's point, something big is happening. There's no doubt about that. Although I'm not sure anything bigger is happening this week than last week or next week. Shouldn't we be a little more careful about always saying every week? After a while, people are just going to get bored or sick of hearing about it.
Well, yeah, you can't. The headline disguises the detail. The aha is in the detail, not in the headline. The detail is that, and I don't know how much you want to go into this, so distract me somewhere else if you don't, but Anthropic's Opus 4.6 and OpenAI's Codex 5.3, released in the last two weeks, are a step change in how you use AI. In the past, you've kind of had this interface where you've typed in prompts and got something back. Now you type in things to be carried out, and the agent creates multiple sub-agents, each with different tasks, and only comes back to you when it's finished. So literally, you can sit there for 30 minutes or longer, actually, and you get a finished product out of it. I put a link in the editorial to venturebets.io, which I built this week in a day. And it's a prediction market for venture capital. And I literally told it what I wanted and it went and did it. And there was a lot of back and forth, maybe half a day's back and forth, where I tweaked it. But for the most part, it was written by one of these new models and these new multi-agent approaches with skills. And it is, you know, kind of like a Polymarket or Kalshi level of quality prediction market.
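A minimal sketch of the multi-agent pattern Keith describes, assuming hypothetical helper names (plan_subtasks, run_subagent) rather than any vendor's real API: one request is decomposed, sub-agents fan out in parallel, and the caller only hears back once everything has finished.

```python
# Minimal sketch of the "one request, many sub-agents" workflow described above.
# plan_subtasks and run_subagent are hypothetical placeholders, not any vendor's API.
from concurrent.futures import ThreadPoolExecutor


def plan_subtasks(request: str) -> list[str]:
    # A real orchestrator would ask a model to decompose the request;
    # here the decomposition is hard-coded purely for illustration.
    return [
        f"{request}: design the data model",
        f"{request}: build the UI",
        f"{request}: write the tests",
    ]


def run_subagent(subtask: str) -> str:
    # Stand-in for a sub-agent working on its task autonomously.
    return f"done: {subtask}"


def run_agent(request: str) -> list[str]:
    subtasks = plan_subtasks(request)
    # Sub-agents run in parallel; the caller only hears back when all have finished.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_subagent, subtasks))


if __name__ == "__main__":
    for line in run_agent("build a prediction market for venture capital"):
        print(line)
```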
So the glass-half-full argument, and this is your thesis: you say fear is an understandable signal but a bad operating system for this moment, because the tools released in the past two weeks, which you just talked about, take the capability from good to beyond belief. Beyond belief, Keith? I mean, what does that mean, beyond belief?

Well, you know, two weeks ago I wouldn't have believed that I could do what I've done in the last two weeks, because I wouldn't have been able to. It would have been impossible. And now it's possible. So, you know, you only really know it if you're using the tool in a specific way that it's designed to help you with. So I would excuse anyone for being on the fear side if they're not using the tools and they don't have a purpose for the tools. But if you are using them and you have a purpose: I use them also for That Was The Week. I've given three examples of how I used them this week. One is That Was The Week. The second is this VentureBets.io. And the third one is what I did with my SignalRank work. And it is night-and-day more productive, better quality, fewer if not zero mistakes.
Yeah, but I can't comment on the other two, but let's use that was the week. It seems to me on the other end, I mean, I use AI too, but not in the way you do, that it looks the same. The editorial is still a bit sloppy. It often sounds like it's written by AI. What's changed? Is it just you're more efficient?
So in the past, I had a workflow that involved curating articles from a feed of roughly 250 websites. I would filter them based on a quick eyeball check daily, and I would click yes on the interesting ones. That would build me a table of contents a bit like the one you just saw. And then I would write the editorial on top of that.
Now, I would say two weeks ago that entire process took about, I don't know, five to six hours out of my week. This week it took less than an hour, because of what I do now: I gave the feeds to an AI agent. It reads them every morning. It auto-selects, using quality and depth as its criteria, anything that it thinks would qualify for my taste test. I then vet that and say yes or no; that takes two minutes every morning. It then builds a draft, which iterates daily. It's building it, not me. It publishes that draft to a URL, which is the one I share with you, Andrew, creatorautomation.ai/current, which is always the most current version; every day it iterates. And then on Friday, a headline writer looks at it and suggests some headlines.
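A rough sketch of the daily curation loop just described: pull the feeds, have a model score each article for quality and depth, queue the survivors for a quick human yes/no, and append the approved items to a draft that iterates day by day. The feed list, score function, threshold, and draft path below are all made-up placeholders, not the actual That Was The Week tooling.

```python
# Hypothetical sketch of the daily curation loop; none of these names or
# values come from the real pipeline.
import json
from pathlib import Path

FEEDS = ["https://example.com/a.xml", "https://example.com/b.xml"]  # placeholder feeds
DRAFT = Path("draft.json")                                          # the iterating draft
THRESHOLD = 0.7                                                     # made-up quality bar


def fetch_articles(feed_url: str) -> list[dict]:
    # A real agent would parse the RSS feed; stubbed here for illustration.
    return [{"title": f"Sample article from {feed_url}", "url": feed_url}]


def score(article: dict) -> float:
    # Stand-in for an LLM call that rates quality and depth on a 0-1 scale.
    return 0.9 if "AI" in article["title"] else 0.4


def human_vet(article: dict) -> bool:
    # The two-minute morning step: a quick yes/no from the editor.
    answer = input(f"Include '{article['title']}'? [y/N] ")
    return answer.strip().lower().startswith("y")


def daily_run() -> None:
    draft = json.loads(DRAFT.read_text()) if DRAFT.exists() else []
    for feed in FEEDS:
        for article in fetch_articles(feed):
            if score(article) >= THRESHOLD and human_vet(article):
                draft.append(article)
    DRAFT.write_text(json.dumps(draft, indent=2))  # the draft iterates day by day


if __name__ == "__main__":
    daily_run()
```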
Yeah, but I could come up with a title, AI Fallout, and the glass half full and glass half empty, in about 10 seconds. I mean, why do you need computers to do that?
Well, just to let you know, that title was my title, not its. So it isn't like you're removing your own agency. It gave some titles; I didn't really like any of them, but it triggered me to come up with my own title, which is the one we've got. And the measure isn't, is it more intelligent? I mean, it's certainly as intelligent. The measure is, does it remove effort from an output or a product that is still worthy of your name being on it?

Yeah, I don't really understand why. I mean, I like your product, That Was The Week, but it's a series of links. I'm not sure why it takes you six hours to put together.
So you're trusting the algorithm to... read these links. And that's the difference. So rather than you spending six hours reading these links and these articles and deciding, curating which pieces get into That Was The Week, the bots are doing it for you. Is that basically the difference?
Yeah. And building output files, which would normally involve you copying and pasting and stuff like that, which is effort. The actual effort that would go into it is now 100% not my effort.
So that's the glass half full, the glass in your image with water. The other image, the glass with spiders crawling out of it, is the half-empty side. You acknowledge that the fear is not irrational. What's happening here? What are the arguments for the glass being half empty when it comes to AI, and how are they credible, Keith?
Well, it comes out of the essays, and there's a bunch of them. Noah Smith has two: one is called The Fall of the Nerds, and the other is You Are No Longer the Smartest Type of Thing on Earth. The coming of AI means that humanity's destiny is mostly out of our own hands.
Yeah, but you're missing the historical specificity of the moment. The moment is that it used to be just science-fiction declarations; there's now physical evidence of it happening.
Well, Smith makes an interesting argument. He uses the example of being bitten by his pet rabbit. And he said it was an animal. Now, this is the you are no longer the smartest type of thing on Earth. So he leads with this story about his pet rabbit. It bit him. And he survived and wrote the article. But he said if the pet rabbit had been a tiger, he'd be dead. Is he suggesting in this essay that AI is the tiger and we might be walking into a trap here?
Well, somebody did some calculations this week on the Fermi paradox, which is: is there anyone else in the universe? And the head of Google's quantum computing effort, who clearly is not a guy prone to sci-fi... Not a rabbit. He made the point that the only way a quantum computer could do the number of calculations required, which is more than the total number of atoms in the universe, is if there were parallel universes, and it could draw on resources from parallel universes, which sounds like crazy sci-fi, right? A YouTube video immediately came in and said that silence in the universe is a function of the fact that when AI meets quantum computing, civilizations get wiped out.
Yeah, again, borrowing from Gandhi, one wonders exactly what a civilization is. But still, I'm not really convinced by this. We have another piece in The Atlantic, by Josh Tyrangiel: America isn't ready for what AI will do to jobs. We've been reading these articles for years, too, and they haven't quite happened. Has anything happened in the last two weeks, Keith, to suggest that AI is about to wipe out all the lawyers or engineers or software designers?
I think what happens every week is more evidence of those issues, you know, hasty conclusions becoming justified by more evidence. And you know, this week, for example, Dario Amodei was interviewed a couple of times. He's clearly on a PR binge.
Yeah, and he did make the point, and by the way, the CEO of Microsoft, Satya Nadella, repeated this, that two years from now there may be no white-collar jobs left.
Well, they certainly have changed, which is... Well, that's different from saying they've gone. That is Matt Schumer's essay: now is the time to change, and you'd better change quickly before it's too late. Otherwise, you'll be one of those who isn't required anymore.
Yeah, I mean, as I said, I tend to be a little more skeptical, but certainly something's happening. What's your position, Keith? You seem to suggest that you think the glass is more half full than half empty. Is that fair?
But I don't think the glass is jobs. I think the glass is: what is the human experience post-automation? And so in that sense, I welcome and embrace the fact that jobs which are currently necessary end up being unnecessary. Why? Because it releases time for humans to do more interesting things. And I think the whole history of the human race is the urge to have time to do more interesting things.
I agree. I think humans should be free to make videos of insects crawling out of glasses. Although there was another piece, which actually you didn't link to this week, suggesting that one of the reasons, or the main reason, why American employment numbers are actually still quite healthy is that everyone's working in healthcare. So I'm not sure... Maybe AI will change some stuff, maybe in the software business, but certainly when it comes to healthcare, there are always going to be a lot of white-collar jobs, for better or worse. It's probably mostly for the worse. You mentioned Amodei, Keith. Anthropic had another good week; you linked to the FT's piece, Anthropic's breakout moment. Are you beginning to acknowledge that Anthropic is a real competitor to OpenAI? In the past, you haven't, I don't think.
I wouldn't say I haven't acknowledged it as a competitor. I just haven't... the facts don't support that they're operating on the same level. Well, look, I'm very pro-Anthropic and I think Anthropic is great, but it's different. And it's good at what it does, which is basically Claude Code, which was a path-breaker, I think. OpenAI is playing catch-up there, and with the launch of its Codex app on the Mac a couple of weeks ago, I think it has actually caught up on my own use case. I've abandoned Claude Code in favor of Codex in the last two weeks because I think it's better. So it's a completely contested space, and they're both players in it. Anthropic clearly is an enterprise-facing revenue generator, and OpenAI is a consumer-facing revenue generator for the most part. So they're different. Anthropic is about one-third to one-quarter of the size of OpenAI, measured by revenue and by valuation, but that's closer than it used to be. So, you know, they're both impressive companies and there's nothing bad to say about either one of them.
Well, the New York Times, you didn't put it in your piece, I sent it to you, another New York Times piece about how OpenAI's biggest challenge is turning its AI into a cash machine. We've had lots of pieces like that over the last few months. Your post of the week, Keith, is from your friend Om Malik, entitled Mad Money and the Big AI Race, in which Om seems to be suggesting that Anthropic has turned its AI into a cash machine, whereas OpenAI is struggling. Is that fair, what Om's saying?
Well, it's fair that he's saying it. And I put it there because it is part of what we should be talking about. I think they're both turning what they do into cash machines, and I think Anthropic's doing it at a faster pace measured from its starting point. It's 10x'd its revenue two years in a row, so clearly that's fantastic growth. But OpenAI is no slouch. It's got a much bigger base and it's still more than doubling its revenue.
I've asked you this before. Why are there so many skeptical economic pieces on OpenAI? Is it because people don't like Sam Altman? Because they're against OpenAI? Because they're envious? Or is there a real case to be made?
No, I think there's a certain common sense that kicks in whenever there's dramatic change like this. Another word for common sense might be disbelief. And I think the natural starting point is disbelief, and being aghast at the scale of things. And so there's a natural tendency to assume that OpenAI is the main evidence of a bubble. But the minute you start putting in numbers and growth curves and looking at the facts, OpenAI's actual spending, not its projected spending, set against its revenue growth certainly depicts a very healthy company. So I just think it's natural instinct when something is so much bigger. I mean, think about it: OpenAI has got as big as companies that took 10 years to reach that size. OpenAI got that big in one year, which just seems impossible. But it's happening.
It's the week after the Super Bowl. We talked last week about the advertising, how angry Sam Altman appeared to be at Anthropic's ad during the Super Bowl mocking OpenAI's embrace of advertising. A lot of people suggested a degree of hypocrisy on the part of Anthropic. Has there been much post-Super Bowl reaction, or have we just moved on to the next big thing?
They got a lot of press, yeah. But I don't think it shifted... And what happened this week is OpenAI actually shipped ads in its products for free users. And they were very undramatic, nothing like what Anthropic depicted they would be. So it's become a bit of a non-issue, I think. And by the way, from a revenue point of view, you've got to believe that OpenAI is on the right side of this. Even if you hate ads, which I do, from a revenue point of view they're certainly on the right side of it. And unless the way they do it undermines their AI, it's not going to be a problem.
My interview of the week, which you always generously add to your newsletter, is with an Oxford political scientist with the wonderful name of Pepper Culpepper. His parents, he told me, were hippies, which explains his first name. He's written a book on the billionaire backlash, and we, of course, talked particularly about Epstein. You never include much on Epstein. Is there any impact, Keith, on the tech community from all these Epstein revelations involving Musk, Thiel, Bill Gates?
Well, there is. I mean, my own perception is, you know, if you throw a pebble into a pond, it creates those circles that go out from the pebble. I think the Silicon Valley engagement with Epstein is in those outer circles. People like Joi Ito, Reid Hoffman; Jason Calacanis was mentioned this week a couple of times. And the thing about Epstein is he was an influence curator. Most of the people that he sought to get into his orbit had nothing to do with his own sexual proclivities. It was influence gathering, if you will. And so to some extent it's a non-story, but because the Epstein story itself is so big at the core, anyone who's named in the context gets, you know, tarred with a brush that probably isn't appropriate. But I think it's mostly a non-issue.
Another piece you linked to this week, which I thought was interesting, was a piece in Wired about how Greg Brockman, OpenAI's president, gave, I think, $25 million to Trump. I assume there's a good cop, bad cop routine within OpenAI, so Sam Altman was left out of this; I'm not sure whether he was involved in any way. Anthropic seems more sympathetic to the Democrats. It's hard to imagine Dario Amodei giving money to Trump. To what extent has Anthropic become the company of progressives and OpenAI the company of conservatives?
I hope not very much. I mean, I think you're right in how you characterize it, and certainly in the zeitgeist that is how it feels. I hope you're wrong because... You said you hope I'm right or wrong? No, I hope the zeitgeist is wrong, not you. And the reason is, I think, currently at least, the Democrats represent a desire to regulate. And Anthropic meets them there, because it has been broadcasting its belief that there's a need for regulation for a long time now. And by the way, OpenAI has tended to agree with that. But I do think there has not been any regulation, and I suspect there will not be any regulation. So I think the truth is that this is all, you know, favor garnering, not substantive at all, and that most of these people are more focused on building AI and tech than they are on garnering attention from politicians. So I hope that it isn't as clearly divided as the zeitgeist seems to imply. It might be. I mean, Dario Amodei certainly has doubled down on some of those narratives in recent weeks. And maybe he's, you know, he donated 20 billion to a super PAC, I think.
Very interesting. What's the difference between billions and millions, Keith? Another very interesting piece that we talked about, I'm not sure if you added it to the newsletter, but it got a lot of press this week, was Google gearing up to sell a 100-year bond to fund some of their investment in AI. What do you make of this? Are these companies now more powerful financially than governments?
Well, you know, what is a 100-year bond? A 100-year bond is a promise to pay, for the next 100 years, some rate of interest to somebody who gives you money. Government bonds are like that: you can buy a 30-year bond and get 4% or something like that. But to buy a 100-year corporate bond, you have to believe that company is going to be around for 100 years. So it does represent confidence in the longevity of Google if you buy that bond. And Google is leveraging that trust to raise enormous amounts of money where they don't have to pay back the principal, only the interest, for a long, long period of time. So it's kind of a social contract between Google and the buyers.
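To make the cash flows concrete, here is a toy illustration in Python; the principal and coupon rate are invented numbers, not the terms of any actual Google bond.

```python
# Toy illustration of 100-year bond cash flows; the figures are invented.
principal = 1_000_000_000   # $1B borrowed (hypothetical)
coupon_rate = 0.05          # 5% annual coupon (hypothetical)
years = 100

annual_coupon = principal * coupon_rate
total_interest = annual_coupon * years

print(f"Annual coupon paid to bondholders: ${annual_coupon:,.0f}")
print(f"Interest paid over {years} years:   ${total_interest:,.0f}")
print(f"Principal repaid only at maturity:  ${principal:,.0f}")
```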
Well, investors can sell bonds on the open market, so they're not locked in to Google's payback time period. They can find another buyer, but then it ends up being the greater-fool theory of who is the last one holding the goods. And yeah, I mean, I invest, and I probably wouldn't buy this bond unless it was for short-term wealth management where I wanted a locked-in rate. You know, you could buy California municipal bonds, which are tax-free, and get eight-ish percent. It has to compete with that.
I saw him a couple of weeks ago in New York. He said to me they expect that in the next year or two there'll be, as he put it, robots on the street. Your startup of the week is Apptronik, a humanoid, whatever that means, robot startup that's raised almost a billion dollars at a $5 billion valuation. Where are we with companies like Apptronik, Keith, in terms of these products actually being ready to go out, so to speak, on the streets? Yeah.
So the two things to think about are, firstly, how dexterous a robot is: can it leverage the fact that the world is built for humans to pretty much do anything we can do physically, or better? And I think the answer is increasingly yes. They have good balance. They can carry things. They can walk up and down stairways. You know, they can use elevators.
Yeah. Battery life is a problem; we can go longer than them. They have roughly four to five hours of battery life, and then they need an hour to recharge. So there's the physical side. And then secondly, there's the intelligence side, which was always lacking. In the past, they could only do repetitive programmed tasks in fixed environments. With LLMs and reasoning, and now the new models that are emerging, especially continuous-learning models, which are under the radar right now but are the next thing coming, they're going to be able to manage the real world whilst reasoning about the real world, and be able to perform tasks in environments that are not fixed. And at that point they get very close to human capability for being able to adjust to the circumstance and carry out goals. I think that is really... Or score goals. Okay, or score goals, for that matter, yeah. So we're not there, Andrew. The honest truth is we're not there. Constraints are still important. Fixed environments are relatively still important. But we rapidly approach the point at which that isn't true.
Yeah, and the more you talk, the more it recalls a wonderful novel, I'm not sure if you read it, by Kazuo Ishiguro, the Anglo-Japanese novelist: Clara and the Sun, which came out in March of 2021 and imagined a world where you'd have, I guess, what we might think of as humanoid robots that are harder and harder to distinguish from humans. It's a brilliant novel because it was so believable. And it seems as if, Keith, for better or worse, we are sailing into Clara and the Sun. Is that fair?
The thing we talked about a couple of weeks ago was Clawed, C-L-A-W-D, which is now called Open Claw. The founder was interviewed by Lex Fridman this week for three hours. But, you know, I use it. It's increasingly companion-like. It asks me every morning, is there anything I want it to do for That Was The Week, for example. And if I... I hope you tell it to piss off. I don't. I say, yeah, can you go and double-check that the selection of articles is good enough? Why didn't you ask it to replace me?
I don't think anyone could replace you, Andrew. Not even an Apptronik robot? Not even an Apptronik robot. By the way, Albert's essay is the first essay this week in the newsletter.
I mean, if Albert's watching, I think he'd come up with a sexier title than Automated Software, Some Implications. Surely an AI could do a better headline than that.
Yeah, I don't know what that says about AI's marketing skills. In all seriousness, though, Keith, and I'm not going to run the video again because it means we can't speak, but your video of this glass half full versus this glass crawling with insects is one metaphor. Maybe a better metaphor is the frog one: us humans being in a boiling pot, and we don't quite realize how hot it's got, and we're actually boiling to death until we do indeed boil to death. I think that's maybe one of the points in Clara and the Sun. Do you think that's a useful metaphor, that we're all in this pot now and it seems very warm and pleasant, and suddenly we're going to burn up and that will be it for us as a species?
Well, you know, a passive... humanity could certainly have that in its future. So the real question comes down to human agency. You know, are we passive humans boiling in a pot, or do we have agency? And certainly in my last two weeks, I've never worked as hard in my life. You said you didn't work hard. I said I worked hard on different things. So what did you do with those five hours you saved on doing That Was The Week? I trained agents to do more of my jobs. I've now got maybe 20 agents doing specialist jobs for SignalRank that I used to do myself.
I don't think they can, because they need access to my databases, which I give them. They need me to sign off on what they produce, which I do, and if I don't, they don't publish it. So I think my agency has been strengthened, not weakened. So I don't feel like I'm in a boiling pot. I feel like, in a way, I've gotten out of a mundane mouse-on-a-treadmill life and am much more in charge of how I use my time on things that have higher value.
So we've got lots of metaphors: mouse on a treadmill, frogs in a pot, insects crawling out of empty glasses. Finally, Keith, are we going to continue to have these weekly discussions where we basically discuss the same stuff? Is there going to be a moment this year where suddenly it's self-evident that everything's changing, rather than just having all these articles that talk about being bitten by rabbits and that sort of thing?
Well, I'd almost turn that back on you, what you think, because you play the role of pushing back appropriately. But I know you also use AI a lot. What's your finger on the pulse of where we are? I mean, I still think we're incredibly early and we're going to be astounded over and over again by things that happen in the future.
Yeah, I mean, I think there was a report on this, there's a report on everything, but there was a report suggesting that staff who use AI are working harder and harder. And certainly for me, the more I use AI, and I agree, it's incredibly useful and saves a huge amount. It's not so much that it saves time; it allows me to do stuff that I never would have even conceived of trying to do previously, like summarizing these conversations in like five seconds. Otherwise I'm not going to spend hours doing it; it's just not worth it. But our class, shall we say, Keith, our AI class, people who understand this technology, who embrace it, who use it, who pay for it, we're working harder and harder. And again, it comes back to another theme that comes up every week: the gulf between the AI class, or the techno class, or Silicon Valley, or however we want to put it, and everybody else is growing larger and larger. So I'm certainly not in your camp, imagining a world where there's no longer any work. In fact, the reverse is true. I think that for our class there'll be more work; for better or worse, we want the work, it makes us rich and happy. But everybody else is going to be out of luck. Well, look...
There's no doom there. It's just my observation of two types of people. On the one hand, people who use this technology, who will become busier and busier, and whose lives will actually acquire more meaning and excitement and probably wealth. and those who aren't. I mean, your idea is that somehow technology will liberate us and then everyone will be able to do whatever they want. I just don't buy that.
Well, the phrase "out of luck" is the same as the phrase "no need to work," and the only difference between the two is whether you have a lifestyle that's funded. And so I think the real discussion is about how human life is supported, especially a life of choice and leisure, which on the face of it are good things. How are those supported by what is largely going to be an automated production of value, value in the non-economic sense, value in the human sense? And I think that, you know, it's entirely possible to believe in either end game, a terrible one or a fantastic one. And which of those two ends up happening really comes down to what we want to happen and how we act to make it happen.
In other words, the glass is half full. And at least according to Keith Teare, the publisher of That Was The Week, it depends on our agency. We can make our future; we can choose to make our future or choose not to. I hope he's right. We will be back next week, probably talking about why last week was the greatest week in the history of technology and how everything has changed. Thank you so much, Keith.