Missing in Action - Real Leadership
Everyone behaved badly in the fight over who controls AI. The real casualty is the question itself: who leads AI policy?
This week, the most important question in technology - who sets AI policy? - got a definitive answer: nobody. Not because the question wasn't asked. Because every participant who could have provided leadership failed to.
Start with Anthropic. Dario Amodei's internal memo is a remarkable document. He accuses OpenAI of gaslighting, calls Palantir's safety offering "almost entirely safety theater," and claims the Pentagon specifically wanted Claude to analyze bulk commercial data on Americans. Some of these claims may be true. But read the memo carefully and something more troubling emerges: Amodei treats the absence of law as an invitation to fill in the blanks himself.
Sam [Altman]'s description and the DoW description give the strong impression (although we would have to see the actual contract to be certain) that how their contract works is that the model is made available without any legal restrictions ("all lawful use") but that there is a "safety layer", which I think amounts to model refusals, that prevents the model from completing certain tasks or engaging in certain applications. (The Information)
The conflation of "all lawful use" with "without any legal restrictions" is the nub of it. "All lawful use" is already a restriction - to lawful use. He then goes on to specify what he thinks the additional restrictions should be, even expressing concern that Hegseth has any authority outside Anthropic's control:
On autonomous weapons, the DoW claims that "human in the loop is the law", but they are incorrect. It is currently Pentagon policy (set during the Biden administration) that a human has to be in the loop of firing a weapon. But that policy can be changed unilaterally by Pete Hegseth, which is exactly what we are worried about. So it is not, for all intents and purposes, a real constraint. (The Information)
He is "worried about" Hegseth having the authority to change operational rules - even though, like him or not, Hegseth runs the Department of War. He would prefer that Anthropic hold the override.
There is no statute governing military AI surveillance. His response is not to advocate for one - it's to impose his own restrictions through a vendor contract and present that as principle. As Ben Thompson argues, if you build an independent power structure that rivals the state's, the state will destroy it. As Dean Ball writes - and Ball is a former Trump AI policy advisor criticizing his own side - operational restrictions in defense contracts are routine, but Anthropic crossed from technical constraints into policy-making. That's not democracy. It's a CEO deciding what the military can and cannot do, based on his personal beliefs about risk, and calling it ethics. The fact that his beliefs may be correct doesn't make the method democratic.
After the memo leaked, Amodei apologized for its tone - specifically for calling OpenAI employees "gullible" and its supporters "Twitter morons." But he called it an "out-of-date assessment," not a retraction, and Anthropic simultaneously announced it would sue the Pentagon over the supply chain designation. It was an apology for the language, not the intent. And the intent was to impose an ideological set of beliefs through a vendor contract.
Ironically, the US did actually use Claude in the Iran operation despite the conflict with Anthropic.
Sam Altman played it differently. He publicly supported Amodei - while, per the NYT timeline, already negotiating the deal that would replace Anthropic the moment they walked away. He announced red lines written into a binding contract: no mass surveillance, human responsibility for lethal force. He invited the Pentagon to offer the same terms to all AI companies. On the surface, pragmatic and constructive.
Those red lines amount to "any lawful use" - compliance with the same legal frameworks that Amodei refused to accept. So Altman's bad behavior was not the same as Amodei's.
Altman was not undemocratic. He was something arguably worse: performatively democratic while strategically opportunistic. He won the contract. He did not win hearts and minds. Short-term gain, long-term credibility cost - because the next time OpenAI announces a principled position, everyone will remember this week. The narrative from OpenAI was about the contract, not about policy. In that sense it matched Amodei's: both abandoned a moment of leadership in which government could have been held accountable for future policy. Amodei cared about that but seriously misplayed his hand. Altman seemingly did not care. Both abstained from true leadership.
Amodei seems to be ideologically driven... Altman seems to be mainly commercially driven... they're both naughty boys in the playground, leveraging the absence of clarity to their own advantage. Neither one of them is an authoritative leader of opinion with the interests of everyone at heart.
Then there's Pete Hegseth. The constitutional argument made by Palmer Luckey (and by me last week) - that elected officials, not corporate executives, should control military AI - has genuine force. Hegseth has the stronger institutional claim. But what is he actually doing with it? He wants to use AI for surveillance and potentially ill-judged military applications. He may have the law on his side, but he has no vision for what AI policy should be.
Hegseth is probably the least culpable of the three, because he is just doing his job (whether you agree with it or not).
The supply-chain risk designation - tweeted, likely beyond his legal authority, exceeding what even Trump intended according to Zvi Mowshowitz's sources - was an act of political retaliation, not governance. Ball's verdict is devastating: the rational response was to cancel the contract and issue new procurement guidance. Instead, Hegseth went for what Ball calls "corporate murder." The broader administration stance is hands-off - mostly defensible in the early days of a technology - but there is no endgame. No framework. No long-term strategy. Just power exercised through tweets and PAC donations.
Om Malik's essay this week (post of the week) draws the contrast that makes the absence of American leadership most visible. China has an AI policy. Beijing's 15th Five-Year Plan commits to AI dominance across the entire economy - chips, quantum, humanoid robots, open-source AI as a deliberate competitive weapon.
America's response: a pinky promise from seven CEOs not to raise your electricity bill. The question Om poses isn't about authoritarianism versus democracy. It's simpler: where is America's long-term AI strategy? Who trains the talent? Where does compute go? How does AI get woven into manufacturing, healthcare, logistics? Right now the answer is: let the companies figure it out and hope voters don't notice. The government is a buyer with no policy for leading the future.
Apple this week announced a new MacBook Pro with 128GB of memory - more than enough to run a very powerful AI model on a laptop. You can have your own agent running locally, without any reliance on major companies or government. That is the core of what China has recognized in pushing open-source AI, and it is enabled by 128GB unified-memory chipsets. Intel and AMD now have equivalents to Apple's.
And then Benn Stancil reminds us what everyone is actually fighting over. Surveillance is not 'Minority Report' - it is a SQL query. The data is already collected. Every click, every movement, every transaction sits in a table somewhere, protected not by encryption but by the tedium of writing 595-line queries. AI doesn't create surveillance capability; it removes the annoyance barrier that was the only thing standing between a database and a surveillance state. The capability exists today regardless of who wins the contract.
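Stancil's point can be made concrete with a toy sketch. The table names and rows below are invented for illustration, but the mechanism is the real one: once identity-keyed logs exist, linking who, where, and what is a single JOIN - no new collection required.

```python
import sqlite3

# Hypothetical stand-ins for data that is already routinely collected:
# location pings and purchase records, both keyed by the same user id.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE pings (user_id TEXT, lat REAL, lon REAL, ts TEXT);
CREATE TABLE purchases (user_id TEXT, merchant TEXT, ts TEXT);
INSERT INTO pings VALUES ('u42', 40.7, -74.0, '2025-01-01T09:00');
INSERT INTO purchases VALUES ('u42', 'pharmacy', '2025-01-01T09:05');
""")

# "Surveillance" here is just a join: where was this person when they
# made this purchase? The only barrier was writing the query.
rows = db.execute("""
    SELECT p.user_id, g.lat, g.lon, p.merchant
    FROM purchases p
    JOIN pings g ON g.user_id = p.user_id
""").fetchall()
print(rows)
```

The tedium Stancil describes is the gap between this five-line query and the 595-line production version; an AI assistant closes that gap, which is exactly his point about the annoyance barrier being the only safeguard.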
Max Tegmark delivers the broadest indictment: every major AI company lobbied against binding regulation while promising to self-regulate. Self-regulation is a reasonable starting point for any new technology. But self-regulation is not a substitute for leadership.