Public Markets Price Outcomes and Punish Uncertainty
Two public venture capital funds listed on the New York Stock Exchange in the past month. The most recent, yesterday, is Fundrise (ticker VCX); the first was Robinhood Ventures (ticker RVI). Fundrise priced at just over $34 and is at $104.50 as I write on Friday morning, more than triple its offer price.
Its basket of private companies includes Anthropic (20.7%), Databricks (17.7%), OpenAI (9.9%) as well as Anduril, SpaceX, Ramp and Epic Games.
Fundrise has been a private market aggregator for several years and had over 100,000 active investors prior to its listing. It typically invests in late-stage companies, but not so late that they are already fully valued. The market has responded well to its perceived upside potential.
Robinhood Ventures priced at $25 a share and is today trading at $23.85, a small discount to its net asset value. But much of its assets are cash, so the discount is understandable. That said, its 'names' are less well known than those Fundrise owns, so retail awareness is lower.
Three AI companies in these groups are heading toward public markets at the same time. If Anthropic, OpenAI, and SpaceX each offer 15% of their shares, the combined raise will roughly equal every dollar raised across all American IPOs over the past decade. Retail investors want the growth characteristics of private companies, especially in AI and especially if they can name-check the companies.
The difference between VCX's first day and Robinhood Ventures' is interesting. The market is telling us something clear: retail investors want private market exposure, but they're discriminating. Portfolio quality matters. Brand matters. Trust matters. And those correlate to what you own, when you bought it, and the prospects for future value growth.
That word - trust - is doing a lot of work right now. The next test is for the AI companies themselves.
Anthropic's annual recurring revenue surged past $19 billion this month, up from $9 billion at the end of 2025. Six billion dollars was added in February alone, driven almost entirely by Claude Code. Revenue doubled in two months. Meanwhile, the Pentagon filed a 40-page rebuttal arguing that Anthropic's safety "red lines" make it "an unacceptable risk to national security." The concern, stated plainly: Anthropic might "attempt to disable its technology or preemptively alter the behavior of its model" during warfighting operations if it feels its corporate principles are being crossed.
I want to say something direct about this, because I think the conversation has gotten confused. This is not about politics; it is about process.
Companies do not determine the tactics or strategy of war.
My opinions are irrelevant, but for what it's worth, I don't like war. I believe in nations' right to self-determination.
But if a country is at war, the people conducting that war - through the chain of civilian command - are the decision-makers. Not a CEO in San Francisco. Not an ethics board. Not a corporate red line negotiated during a contract dispute. The tools of war are governed by the people who authorized the war, subject to law, oversight, and democratic accountability. That is how civilian control of the military works. And AI will definitely be used.
Dario Amodei has earned real credibility this year by performing what Om Malik calls 'symbolic capitalism'.
ChatGPT uninstalls surged 295% after OpenAI signed the Pentagon deal that Anthropic refused. Claude climbed to number one in the App Store. Employees from OpenAI, Google, and Microsoft filed amicus briefs supporting Anthropic's lawsuit. CNN reports that Anthropic now wins 70% of head-to-head matchups against OpenAI for first-time enterprise buyers. These are not small numbers.
But the principle Amodei is asserting - that a private company should decide which military applications of its technology are acceptable - is one I think we should examine carefully rather than applaud reflexively. If we accept that AI companies can veto military use cases based on their own moral frameworks, we've created a world where unelected technologists hold de facto authority over national security decisions. That might feel good when the technologist shares your values. It won't feel good when the next one doesn't. Nobody likes a "private army" or a private command structure.
The real problem is different. It's not that Anthropic is wrong to care about safety. It's that the people who should care are not doing the governance work.
Congress hasn't planned for our AI future. Vinod Khosla, in this week's post of the week, is doing more.
The executive branch is using procurement fights as a substitute for policy. The judiciary is sorting it out one lawsuit at a time. In the absence of a framework, everyone is improvising - and the improvisation is happening at the speed of contract negotiations, not the speed of democratic deliberation.
This matters for public markets because governance risk is now a pricing variable. Anthropic's court filings reveal that a financial services customer paused a $15 million deal over the Pentagon designation. Eighty million dollars in contracts now require unilateral cancellation rights. A grocery chain simply canceled meetings. When the supply-chain-risk label can vaporize enterprise pipeline overnight, investors have to price that. And when a company's safety principles can trigger government retaliation, that's not an ethics story - it's a balance sheet story. The Anthropic and OpenAI IPOs will be impacted by market reaction to that.
Meanwhile, the infrastructure bet underlying all of this continues to grow. Tomasz Tunguz published numbers this week that deserve attention: for every dollar hyperscalers earn from AI today, they are spending twelve dollars building more capacity. That's $575 billion in capital expenditure this year. Amazon, Microsoft, Alphabet, Meta, and Oracle will spend 90% of their operating cash flow on AI data centers in 2026, up from a historical average of 40%. At NVIDIA's GTC event this week, five more years of growing capital expenditure were forecast.
Alphabet issued a century bond - maturing in 2126 - the first by a tech company since Motorola in 1997. The depreciation math encodes the bet: a five-year payback on $431 billion in AI capex at 60% gross margins requires $180 billion in annual AI revenue. Current AI revenue across the hyperscalers is $35 billion. They are underwriting five-times growth in five years. Not unreasonable on the face of it.
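That arithmetic can be checked in a few lines. A minimal sketch using only the figures above; the "payback floor" framing (capex recovered entirely from gross profit) is my own, not the article's source model:

```python
capex = 431e9           # AI capex being depreciated
margin = 0.60           # gross margin on AI revenue
payback_years = 5
current_revenue = 35e9  # current hyperscaler AI revenue
target_revenue = 180e9  # annual AI revenue the bet requires

# Floor if capex were recovered purely from gross profit over five years;
# the $180bn target sits above this, implying a return beyond bare payback.
floor_revenue = capex / (payback_years * margin)    # ~ $144bn

multiple = target_revenue / current_revenue          # ~ 5.1x
implied_cagr = multiple ** (1 / payback_years) - 1   # ~ 39% per year
```

The implied compound growth rate, roughly 39% a year for five years, is the number public investors will be asked to underwrite.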
At the same time, the physical supply chain is more fragile than the investment thesis assumes. Taiwan relies on the Middle East for 37% of its liquefied natural gas (LNG) and much of its helium and sulfur - industrial inputs that semiconductor fabs cannot operate without. TSMC's most advanced node capacity is already one of the industry's biggest constraints. Google is signing multi-gigawatt power deals and building its own generation capacity because the grid can't keep up.
These are not abstract risks. Iranian drones hit AWS data centers in the Gulf earlier this year. Eleven million people lost access to basic services. The entire cost equation for sovereign AI infrastructure changed overnight.
So here is the picture as AI approaches public markets: the revenue growth is extraordinary, the infrastructure bet is unprecedented, the governance framework is missing, and the geopolitical assumptions are untested. Public markets are about to be asked to price all of this simultaneously.
There is an optimistic reading. Public markets impose discipline that private markets don't. Quarterly reporting. Audited financials. Independent boards. Analyst coverage. Price discovery. The venture ecosystem has operated without a clearing mechanism for five years - Carta's data this week showed the median 2017-vintage fund still below 1x DPI (distributions to paid in capital) after eight years, with only a third of 2021-vintage funds having returned any capital at all. Public markets are the clearing mechanism. They force truth-telling. They separate paper marks from real value.
VCX's 149% premium suggests investors are hungry for that exposure. But a premium built on OpenAI, Anthropic, and SpaceX is a premium built on exactly the companies whose governance, geopolitics, and infrastructure risks I've just described. The access is real. The question is whether the price reflects the risks.
At this moment these assets are priced somewhere between current value and likely future value. The difference is a premium to current value.
The greater the uncertainty about the future, the smaller the premium. In extreme cases these funds will be priced below current value, if the market believes late-stage buyers have overpaid. But if OpenAI doubles in value annually for the next five years, then VCX's price will look modest.
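One stylized way to see this trade-off (illustrative only; the blend model, discount rate, and all numbers are my assumptions, not a claim about how VCX is actually priced):

```python
def implied_price(nav: float, expected_future_nav: float,
                  conviction: float, r: float = 0.10, years: int = 5) -> float:
    """Blend current NAV with the discounted expected future NAV,
    weighted by the market's conviction in that future."""
    pv_future = expected_future_nav / (1 + r) ** years
    return conviction * pv_future + (1 - conviction) * nav

nav = 100.0
# High conviction in strong growth -> large premium to NAV.
bull = implied_price(nav, expected_future_nav=500.0, conviction=0.8)
# Same growth view, low conviction -> the premium shrinks.
shaky = implied_price(nav, expected_future_nav=500.0, conviction=0.2)
# If the market thinks late-stage buyers overpaid, price falls below NAV.
bear = implied_price(nav, expected_future_nav=120.0, conviction=0.6)
```

The point is structural: uncertainty compresses the weight on the future, and enough doubt about late-stage marks pushes price under NAV, exactly the Robinhood Ventures pattern.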
Vinod Khosla - one of the most prominent venture capitalists in the world - posted something this week that stuck with me: "AI will change the labor/capital share of income in favor of capital, so tax structures must rebalance that towards labor. Capitalism is by permission of democracy." He argued for sweeping tax law changes in favor of labor.
He's right. And public markets are where that permission gets tested in real time. Every share price is a vote of confidence - or a withdrawal of it. When Anthropic, OpenAI, and SpaceX file their S-1s, we'll learn something important: not just what these companies are worth, but what the public is willing to accept about how AI power is distributed, governed, and paid for.
The IPO window is open. The question is whether we're ready for what comes through it.