Anthropic’s CFO, Krishna Rao, reveals in his first podcast appearance with Patrick O’Shaughnessy the unprecedented economics, cultural philosophy, and strategic gambles powering one of the fastest-growing enterprises in history.
When Krishna Rao joined Anthropic two years ago as the company closed its Series D, the AI lab had roughly $250 million in run-rate revenue. The plan, he was told, was to reach $1 billion. His first instinct was to ask the question any seasoned finance executive would: "In what year?"
That question, he now admits, exposed exactly the kind of linear thinking he would have to unlearn.
Anthropic began 2026 with approximately $9 billion in run-rate revenue. By the end of the first quarter, that figure had crossed $30 billion, a more than threefold leap in roughly four months. It is one of the steepest revenue curves in corporate history, and according to Rao, it is not an anomaly. It is the thesis.
The Compute Question That Consumes Everything
Ask Rao what occupies his time, and the answer is unambiguous: compute. He estimates he spends 30 to 40 percent of his working hours on it.
"The compute that we procure is the lifeblood of our business," Rao said. "It's the canvas on which everything else gets built."
The math is brutal in both directions. Buy too much compute, and the company goes out of business. Buy too little, and Anthropic falls off the frontier and cannot serve its customers. Compute cannot be procured on short notice; a gigawatt of capacity is not a next-day delivery. Decisions made today shape the company's competitive position eighteen months from now.
To navigate this, Anthropic operates across three chip platforms: Amazon's Trainium, Google's TPUs, and Nvidia's GPUs. The company is, by Rao's account, the only frontier language lab using all three. That flexibility was hard-won, built over years of investment in compilers and orchestration infrastructure that allow workloads to move fungibly across architectures.
The scale of recent commitments underscores the stakes. Last month, Anthropic signed a five-gigawatt deal with Google and Broadcom for TPUs starting in 2027, plus a separate Amazon Trainium agreement for up to five additional gigawatts, commitments that together exceed $100 billion. A new partnership with SpaceX's Colossus facility in Memphis, announced just hours before the conversation, adds near-term capacity primarily for consumer and prosumer demand.
The "Cone of Uncertainty"
Rao describes Anthropic's planning framework as a "cone of uncertainty": a range of scenarios over a one- to two-year horizon from which the company works backward. The challenge, he explains, is that small variations in monthly growth rates compound into wildly different outcomes when a business is moving exponentially.
"Humans mostly think linearly," Rao said. "That's a paradigm I've had to break for myself."
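The compounding point is easy to underestimate at linear scale. A minimal sketch, using hypothetical monthly growth rates (the figures below are illustrative, not Anthropic's), shows how a narrow band of assumptions fans out into a wide cone over a two-year horizon:

```python
# Illustrative only: how small differences in monthly growth compound
# over a 24-month horizon. Growth rates are hypothetical, not Anthropic's.

def project(run_rate: float, monthly_growth: float, months: int) -> float:
    """Compound a run-rate forward at a fixed monthly growth rate."""
    return run_rate * (1 + monthly_growth) ** months

start = 1.0  # normalize the starting run-rate to 1.0
for g in (0.10, 0.15, 0.20):  # 10%, 15%, 20% monthly growth
    print(f"{g:.0%}/month -> {project(start, g, 24):.1f}x after 24 months")
```

At these rates the 24-month outcomes span roughly 10x to 80x, which is why a few percentage points of monthly growth, held constant, separate the bottom of the cone from the top.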
Internally, compute is allocated across three categories: model development and research, internal use by employees, and serving customers. There is a hard floor on research compute that the company will not breach, even when customer demand strains the system. Rao notes that the compute allocated to employees alone could generate billions in revenue if redirected, but the productivity gains from internal use accelerate model development itself.
Why the Frontier Pays
A core conviction at Anthropic is that the returns to being at the frontier of intelligence are extraordinarily high, particularly in enterprise.
Rao rejects the framing of model intelligence as a single IQ-style score. New generations, he argues, deliver multi-dimensional gains: stronger long-horizon task performance, better tool use, faster agentic execution. A model that completes a task in a day rather than a week is effectively seven times more valuable, even if raw capability appears similar on benchmarks.
Equally important is efficiency. The car analogy breaks down, Rao said: moving from a sedan to a sports car typically means worse fuel economy, but each generation of Claude, from Opus 4 to 4.5, 4.6, and now 4.7, has delivered both capability gains and meaningful efficiency multipliers in token processing. That efficiency feeds reinforcement learning loops, customer serving costs, and margins simultaneously.
The result is what Rao calls a Jevons paradox in action. When Anthropic cut the price of the Opus family at the launch of Opus 4.5, because efficiency gains made it economical and because customers were underutilizing the tier, consumption rose far more than the price reduction would have predicted.
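The Jevons dynamic amounts to price elasticity of demand greater than one: if usage expands proportionally faster than price falls, total spend rises despite the cut. A toy calculation, with invented numbers rather than Anthropic's actual pricing or usage data:

```python
# Toy illustration of a Jevons-style dynamic: all figures are invented,
# not Anthropic's actual pricing or consumption data.

def revenue_after_cut(price: float, units: float, price_cut: float,
                      usage_multiplier: float) -> float:
    """Revenue after cutting price and seeing usage expand in response."""
    return price * (1 - price_cut) * units * usage_multiplier

before = 1.00 * 1_000  # $1 per unit, 1,000 units consumed
after = revenue_after_cut(1.00, 1_000, 0.33, 3.0)  # 33% cheaper, 3x usage
print(f"before: ${before:,.0f}  after: ${after:,.0f}")
# If usage triples while price drops by a third, revenue roughly doubles.
```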
A Research Lab That Happens to Sell Software
Anthropic now counts nine of the Fortune 10 as customers. Net dollar retention runs above 500 percent on an annualized basis. Rao mentioned signing two double-digit-million-dollar commitments during a twenty-minute car ride to the interview.
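Net dollar retention measures how much an existing customer cohort spends this period relative to the last, expansions net of churn; anything above 100 percent means existing customers alone are growing revenue. A minimal sketch of the standard calculation, with an invented cohort:

```python
# Net dollar retention (NDR): revenue from an existing customer cohort
# this period divided by the same cohort's revenue a period ago.
# Cohort figures below are invented for illustration.

def net_dollar_retention(prior: dict[str, float],
                         current: dict[str, float]) -> float:
    """NDR over the customers that existed in the prior period.

    Customers missing from `current` count as churned (zero revenue);
    customers new in `current` are excluded, per the standard definition.
    """
    base = sum(prior.values())
    retained = sum(current.get(name, 0.0) for name in prior)
    return retained / base

prior = {"acme": 100.0, "globex": 50.0, "initech": 25.0}
current = {"acme": 600.0, "globex": 300.0, "hooli": 40.0}  # initech churned
print(f"NDR: {net_dollar_retention(prior, current):.0%}")
```

In this made-up cohort, heavy expansion by two customers more than offsets one churn, yielding an NDR above 500 percent, the kind of figure Rao cites.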
Yet Rao insists the company's identity remains that of a research lab. Roughly 90 percent of code at Anthropic is now written by Claude Code, and the same models that ship to customers accelerate the development of the next generation, a recursive self-improvement loop the company is investing heavily to maintain.
The strategy is predominantly horizontal. Anthropic builds the platform; customers build the businesses. Vertical extensions, such as Claude Code, Claude for Financial Services, and Claude for Life Sciences, exist primarily where the company can demonstrate model capability ahead of the market or showcase patterns for the ecosystem to emulate.
Asked about customers who fear Anthropic as a competitor, Rao acknowledged the tension. Capabilities sometimes surprise even Anthropic itself, he said, and the speed of change compresses what would historically have been decades of disruption into months. The company's response has been to lean into partnership: early access programs, deep customer collaboration, and ecosystem-oriented product launches.
The Safety-Commercial Flywheel
One of the more counterintuitive findings of Anthropic's commercial journey, according to Rao, is that its investments in interpretability and alignment research, originally pursued for mission reasons, have become a competitive advantage in enterprise sales.
The largest companies in the world are entrusting Claude with sensitive data and customer-facing workflows. A company that can demonstrate it understands what is happening inside its models becomes the trustworthy choice. "That's not why we invested in it," Rao said, "but it did have this downstream effect."
The Mythos Moment
The release of Mythos marked a turning point. For the first time, Anthropic deployed a phased release strategy, prompted by a spike in cyber capabilities: the same approach that found 22 vulnerabilities in an open-source codebase using a prior generation found 250 with Mythos.
The model can be used defensively to patch code at scale, or offensively. Anthropic's response was not to withhold it but to release it gradually to a controlled group, working closely with government partners. Rao framed this as a template for handling future capability jumps: not a refusal to ship, but a recognition that responsibility must scale with power.
Culture as Moat
When Meta and others made aggressive talent offers across the frontier labs, Anthropic reportedly lost two researchers. Other labs lost dozens.
Rao attributes retention to a culture built around collaboration, intellectual honesty, and what the company calls "talent density over talent mass." All seven co-founders remain. The vast majority of the first thirty employees remain. Every candidate, regardless of brilliance, must pass a culture interview—and the bar is real.
The company runs on transparency. CEO Dario Amodei addresses the entire company every two weeks, takes unscripted questions, and writes regularly to employees. Decisions are debated rigorously and, once made, supported without second-guessing.
What Could Go Wrong
Pressed on what would push Anthropic toward the bottom of its cone of uncertainty rather than the top, Rao identified three risks: customer diffusion rates failing to keep pace with model capability, scaling laws unexpectedly flattening, and competitive pressure eroding Anthropic's position at the frontier of agentic AI.
None of these, he emphasized, can be ruled out.
The Optimistic Case
What Rao returns to, when asked what excites him most, is biomedicine. The prospect that diseases diagnosed today as incurable might become treatable within a patient's lifetime, because AI compresses drug discovery timelines by orders of magnitude, is the outcome he finds most worth the capital, the compute, and the uncertainty.
It is also, perhaps, the answer to the broader question of why a technology that polls poorly with the general public deserves the investment it is receiving. The industry, Rao concedes, has work to do in articulating both the promise and the risks honestly.
"If somebody's just telling me all the good news and none of the bad news," he said, "do I really trust this perspective?"
For now, the cone of uncertainty keeps widening, the compute keeps arriving in layered tranches, and the revenue keeps compounding faster than linear minds expect. Whether that pattern holds is, in some sense, the most consequential open question in technology today.
(Disclaimer – This post is auto-fetched from publicly available RSS feeds. Original source: Yourstory. All rights belong to the respective publisher.)