Hands Off Sam Altman!
The Campaign to Discredit AI
This week Dario Amodei’s Anthropic announced that it had developed a new model, Mythos, capable of outperforming existing cybersecurity software in discovering vulnerabilities and, by implication, exploiting them.
Anthropic chose to restrict release of the model until 40 selected companies had the chance to use it to patch those vulnerabilities.
Amodei won plaudits for that decision. But he still plans to release software he is simultaneously describing as dangerous.
In the same week, The New Yorker published a long, rambling portrait of Sam Altman, depicting him as untrustworthy and slippery. It feeds a media narrative that increasingly seeks to demonize the OpenAI founder.
Altman now attracts a Musk-like media frenzy.
This personality-driven circus is mostly a sideshow. It seems motivated by subjective feelings: jealousy, envy, and dislike among them. It is largely devoid of serious discussion about the transformative impact AI is having on our lives, and will have on future generations.
Shallow and gossipy are the words that come to mind.
The truth is that we do not have to trust Sam Altman. We do not have to trust Dario Amodei either. What matters is whether science and innovation deliver results. Demis Hassabis, interviewed this week about DeepMind’s AlphaFold breakthrough, feels like a much more pertinent focal point. We have to trust progress.
Anthropic’s Mythos model certainly seems extraordinary. Its ability to discover vulnerabilities that other software had failed to uncover for decades led some to conclude that software approaches to cybersecurity are dead. “Software was lunch. Execution is dinner” was the most memorable line. The meaning is clear enough: AI is becoming useful in its own right, not just as an add-on to existing software.
The market already understands this. Crunchbase’s Q1 data shows capital still flooding into AI at extraordinary scale. Carta’s compensation data shows scarce AI talent being repriced in real time. Andy Jassy’s shareholder letter reads like a full-throated defense of hyperscaler capex as an execution advantage, not a speculative indulgence. Investors are rewarding capability, yes, but more specifically they are rewarding the ability to operationalize capability at scale.
Anthropic’s handling of Mythos is a useful example of the deeper issue. Holding back a powerful model can look responsible. But if that same capability can materially improve cyber defense, restraint may be less effective than deployment. In practice, legitimacy may come less from caution than from solving urgent real-world problems.
Rather than demonizing individual personalities, or lionizing them, the real focus should be on execution against real-world problems.
Of course, deployment is not frictionless. Azeem Azhar’s point, that the labs are already rationing access, matters because it reminds us that AI is not yet abundant where it counts.
Bloomberg’s report that OpenAI paused Stargate UK over energy costs and regulation says the same thing from another angle. The Big Technology piece on data center backlash extends it further. Execution is dinner because dinner is physical. It depends on land, power, permits, chips, local politics, and cost. The next phase of AI will be shaped as much by infrastructure constraints as by model advances. That reality may favor more execution-centric systems, including China’s.
And even when the infrastructure exists, the institutions usually don’t. Tyler Akidau’s “We All Built Agents. Nobody Built HR.” and the Fast Company piece on managing AI as a new job both point to the same gap. It is relatively easy to buy intelligence. It is much harder to absorb it. Roles have to change. Accountability has to change. Management has to change. Companies rushed to adopt the tools before they redesigned themselves to live with the consequences. Again, this is far more important than personality profiles of CEOs.
Distribution is part of this too. AI SEO manipulation and the continued degradation of social media both show that deployment is not just about building the capability. It is about controlling the pathways through which people encounter, cite, trust, and depend on it. Retrieval, visibility, and placement increasingly shape outcomes as much as underlying quality. The winners will not just build the strongest systems. They will build the systems that become unavoidable. OpenAI and Anthropic are both good at that, even if parts of the media remain focused on more trivial issues.
None of this means the skeptics are wrong about everything. Every major technology transition looks messy in the middle. Constraints create friction. Security remains vulnerable. Organizations may adapt more slowly than they need to. This week’s evidence suggests that capability is outrunning the social, organizational, and political machinery needed to absorb AI cleanly. But that is not the fault of Sam Altman or Dario Amodei, both of whom are building credible businesses with real-world impact.
The winners in AI will be the actors who deploy effectively enough that the world accepts, needs, or cannot resist what they build. Trust will come from applying AI to real problems and producing good outcomes.
If that is right, then the central question is no longer whether Sam Altman or Dario Amodei should be trusted, or AI in the abstract. It is who can get AI into the world, at scale, in forms people depend on, before they can stop it. OpenAI and Anthropic are already on that list. So are Sam Altman and Dario Amodei.