
Feb 1, 2026

AI Is Growing Up. Its CEOs Aren't

If AI raises output but compresses broad purchasing power, you do not get a flourishing next economy.

In every adolescent there is a child and an emerging adult.

The technology is growing up in public. But the leaders are still behaving like teenagers.

Last week we called this the adolescence of AI. This week made that diagnosis harder to dismiss.

A few things make me say this. First, OpenAI's ChatGPT 5.3, paired with the new Mac-based Codex app and its "skills" and "automations," is exceptional and a big step forward in multi-agent orchestration of tasks.

Second, Anthropic's Opus 4.6 is equally adept at similar feats, especially if you turn on the Teams feature. This lets Claude create multiple agents that can "talk" to each other as they collaborate on tasks, each with its own sub-role.
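To make the pattern concrete, here is a minimal sketch of what a "team" of collaborating agents looks like in structural terms: a lead agent delegates sub-tasks to workers with distinct roles, and the workers report back. All names here are hypothetical illustrations; this is not Anthropic's or OpenAI's actual API, and a real agent would call a model where this sketch only simulates output.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    role: str
    inbox: list = field(default_factory=list)

    def send(self, other: "Agent", message: str) -> None:
        # Agents "talk" by passing messages into each other's inboxes.
        other.inbox.append((self.name, message))

    def work(self, task: str) -> str:
        # Placeholder: a real agent would invoke a model here.
        return f"{self.role} result for: {task}"

def run_team(task: str, roles: list) -> list:
    # The lead agent splits the task by role, delegates, and collects results.
    lead = Agent("lead", "coordinator")
    workers = [Agent(f"agent-{i}", role) for i, role in enumerate(roles)]
    for w in workers:
        lead.send(w, f"{task} ({w.role})")
    return [w.work(w.inbox[0][1]) for w in workers]

results = run_team("draft the report", ["researcher", "writer", "editor"])
```

The point of the sketch is the topology, not the plumbing: once agents can message each other, coordination loops and sub-role specialization fall out almost for free, which is exactly what makes the capability jump feel significant.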

The tools are no longer lab curiosities. They are entering daily workflows, enterprise stacks, and market structure at the same time. And OpenAI is outperforming its own history here by being excellent for real business use cases.

OpenAI’s agentic coding push and Frontier’s “co-worker” framing are not incremental feature drops. They signal a role change: from assistant to delegated operator. Om Malik’s “How AI Goes to Work” captures the practical consequence. The real shift is not chat. It is embedded intelligence inside ordinary software, where AI starts making consequential decisions in routine work.

But now we come to the childish side. Anthropic is placing ads (yes, ironic) during Super Bowl weekend to mock OpenAI's decision to put ads (yep!) in the free version of ChatGPT. And Sam Altman's response was no less childish than Anthropic's decision.

The ads fight matters more than it looks, and it could be an adult conversation.

Anthropic's anti-ads campaign and Altman's reaction are not brand theater. They are a constitutional debate about incentives. If the interface becomes your planning layer, your research layer, and your execution layer, then monetization logic becomes behavior logic. An ad model can work. A subscription model can work. Neither is neutral. Each shapes what the system optimizes for when user goals and platform economics diverge. I personally dislike ads, but I do not object to relevant links, even if paid for by the owner of the link.

Then there are Moltbook and OpenClaw. The broader agent experiments add the social warning. Agents can self-organize fast and effectively. This is a new reality, one that emerged literally in the past week. Here is a post by my agent, ClawdTeare, based on what it has learned from my work.

And there are lots of them talking to each other.

They generate status dynamics, coordination loops, and organization surfaces quickly, even in toy environments like Moltbook.

This is what adolescence looks like in systems terms: rapid capability growth, uneven judgment, weak institutions.

Now add capital. The January ‘State of Venture’ data, Beezer Clarkson’s contraction view, Peter Walker’s SAFE cap distribution, and Credistick’s “Who Does the Series B?” all point the same direction: concentration is rising while narrative still sells decentralization.

We are not just watching adolescent AI. We are also watching the transformation of market structure around AI.

Ben Thompson’s chip-supply warning pushes this further. Even if software matures, the physical substrate remains concentrated and fragile. So the “abundance” story now depends on both behavior and bottlenecks: model incentives, capital incentives, and compute constraints.

Tim O'Reilly's point is the economic anchor: productivity without circulation is not prosperity. If AI raises output but compresses broad purchasing power, you do not get a flourishing next economy. You get a narrower one with better demos. Once again it becomes obvious that AI can deliver prosperity and abundance via automation and cost reduction, mostly in labor costs. But can society be uplifted and civilization strengthened by that? The answer needs to be yes, and that requires planning.

A reasonable objection is that this is just how technological transitions work. Let markets run. Let weak models and weak firms wash out. There is truth in that. But the weak point in that argument is time. Incentive defaults set early usually compound for years. If misalignment becomes ingrained in the fabric and infrastructure, correction becomes political and expensive. Bernie Sanders's call to block AI development is not the answer. We do need wealth production to accelerate. And tech leaders are not the "bad guys." But we need adult politicians to go beyond point-scoring and actually engage with the actors on the stage for good outcomes.

The practical takeaway is straightforward. OpenAI and Anthropic this week showed they can build for reliable delegation and productive autonomy. Using that, we can all design business models that enhance user agency while expanding the use of tools. It is right to finance companies for durability, not markup optics. And that is happening. But we need to treat the circulation of capital, not just its production, as a first-class success metric.

AI will keep improving. That is the easy prediction. The hard question is whether we can build adult institutions before adolescent incentives lock in.