
That Was The Week Diary

Feb 7, 2026 · 2026 #3 Editorial

Anthropic's Super Bowl Ad Is Dishonest

In Every Adolescent there is a Child and an Emerging Adult.


AI Is Growing Up. Its CEOs Aren't

The technology is growing up in public. But the leaders are still behaving like teenagers.

Last week we called this the adolescence of AI. This week made that diagnosis harder to dismiss.

A few things lead me to flag "growing up". First, OpenAI's ChatGPT 5.3, paired with the new Mac-based Codex App and its "skills" and "automations", is exceptional and a big step forward in multi-agent orchestration against tasks.

Second, Anthropic's Opus 4.6 is equally adept at similar feats, especially if you turn on the teams feature. This allows Claude to create multiple agents that can "talk" to each other as they collaborate on tasks, each with a sub-role.

The tools are no longer lab curiosities. They are entering daily workflows, enterprise stacks, and market structure at the same time. And OpenAI is outperforming its history here by being excellent for real business use cases.

OpenAI's agentic coding push and Frontier's "co-worker" framing are not incremental feature drops. They signal a role change: from assistant to delegated operator. Om Malik's "How AI Goes to Work" captures the practical consequence. The real shift is not chat. It is embedded intelligence inside ordinary software, where AI starts making consequential decisions in routine work.

But now we come to the childish side. Anthropic is placing ads (yes, ironic) into the Super Bowl weekend to make fun of OpenAI's decision to use ads (yep!) in the free versions of ChatGPT. And Sam Altman's response was no less childish than Anthropic's decision.

The ads fight matters more than it appears to, and because of that it could and should be an adult conversation, not a dirt fight.

Anthropic's anti-ads campaign and Altman's reaction are not theater. They are a debate about business models and both user and company incentives.

For users, if the interface becomes your planning layer, your research layer, and your execution layer, then monetization is a big question. An ad model can work. A subscription model can work. Both can work. Neither is neutral.

Each shapes what the system optimizes for when user goals and platform economics diverge. I personally dislike ads, but I do not object to relevant links, even if paid for by the owner of the link. And I do not expect links to worsen the experience but to enhance it.

OpenAI has made plain that it has no intention of warping AI responses to promote ads. In that sense the Amodei ads are dishonest: their only claim is that OpenAI will do exactly that.

Then there is Moltbook and OpenClaw. The broader agent experiments indicate a social shift. Agents can self-organize fast and effectively. This is a new reality, literally in the past week. Here is a post by my agent, ClawdTeare, based on what it has learned from my work.

And there are lots of them talking to each other.

They generate status dynamics, coordination loops, and organization surfaces quickly, even in toy environments like Moltbook.

This is what adolescence looks like in systems terms: rapid capability growth, uneven judgment, weak institutions.

The launch of OpenAI's ChatGPT 5.3 and Anthropic's Opus 4.6 both reinforce this leap in capability. Multi-agent systems with cooperation between agents are taking what AI can do to new levels, and threatening to replace software itself. Indeed, this week OpenAI's Codex App did all of the work building the content for That Was The Week. The software I built a few months ago (creatorautomation.ai) is essentially not needed any more. Agents do it better. Between OpenClaw (Clawd), Anthropic, and OpenAI, OpenAI was easily the best, Clawd second, and Anthropic third. I am a long-term Claude Code user, so my flip to the OpenAI Codex app on my Mac is a big deal.

The animation at the start of this week's video envisages the end of software as a business. For me at least it is already happening. The same is true of services businesses.

Now add capital. The January 'State of Venture' data, the contraction view, the SAFE cap distribution, and "Who Does the Series B?" all point in the same direction: concentration is rising, and early-stage investing is getting harder and harder. Read the originals as there is a lot to digest. Also, see Rob Hodgkinson's How VC concentration is impacting seed managers, which missed the deadline for inclusion below.

We are not just watching adolescent AI. We are also watching a transformation of market structure around AI, and the replacement of SaaS, cloud, and enterprise software businesses by new agent-based workflows.

Ben Thompson's chip-supply warning pushes this further. Even if software matures, the physical substrate remains concentrated and fragile.

The "abundance" story seems more realistic in this context. It depends on both behavior and bottlenecks: model incentives, capital incentives, and compute constraints. But even assuming those are solved, it depends on who benefits.

Tim O'Reilly's point is the economic anchor: productivity without circulation is not prosperity. If AI raises output but compresses broad purchasing power, you do not get a flourishing next economy. You get a narrower one with better demos. Once again it becomes obvious that AI can deliver prosperity and abundance via automation and cost reduction, mostly labor costs. But can society be uplifted and civilization strengthened by that? The answer needs to be yes, and that requires planning.

A reasonable objection to planning for distribution of abundance is that this is just how technological transitions work. Let markets run. Let weak models and weak firms wash out. There is truth in that. But the weak point in that argument is time.

Incentive defaults, when set early, usually compound for years. If misalignment becomes ingrained in the fabric and infrastructure, correction becomes political and expensive.

Bernie Sanders' call to block AI development is not the answer. We do need to produce wealth (abundance) to accelerate our lives towards better experiences. Tech companies are not the "bad guys".

But we need adult politicians to go beyond point scoring and actually engage with the actors on the stage for good outcomes. Calling for innovation to slow or stop does the opposite; if adopted, it would ossify the present.

The practical takeaway is straightforward. OpenAI and Anthropic this week showed they can build agents capable of reliable delegation and productive autonomy.

Using that we can all design business models that enhance user agency while expanding the use of tools. But we need to treat the circulation of capital, not just its production, as a first-class success metric.

AI will keep improving. That is the easy prediction. The hard question is whether we can build adult institutions before adolescent incentives lock in.
