IBM chief executive Arvind Krishna is questioning the economics behind the current flood of artificial intelligence data centres. In a wide-ranging interview, he said the industry’s capital outlay is racing ahead of realistic cash flows.
He made the remarks on the Decoder podcast, a conversation that set out rough cost arithmetic and timelines for replacing hardware that ages quickly. The discussion landed as hyperscalers boost spending and power demand continues to climb.
Krishna’s central claim rests on simple arithmetic. If industry commitments reach about 100 gigawatts of AI capacity, capital costs could approach $8 trillion (C$10.8 trillion). He argued that the profits needed to service such a sum are unlikely at today’s prices.
“There’s no way you’re going to get a return on that,” Krishna said. He added that current enthusiasm risks outrunning credible revenue models.
Napkin math tests hyperscaler economics
The interview broke down unit costs in plain terms. Krishna estimated a one gigawatt data centre, fully built and equipped, at roughly $80 billion (C$108 billion). If a single platform aims for 20 to 30 gigawatts, that implies roughly $1.6 trillion to $2.4 trillion (C$2.2 trillion to C$3.2 trillion) of capital. All players together, he said, could push the total near $8 trillion (C$10.8 trillion). The framing puts scale, price, and timing on the same page.
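A minimal sketch of that arithmetic makes the scaling explicit. The per-gigawatt figure is Krishna’s rough estimate as reported; the CAD rate is not quoted anywhere and is simply inferred from the article’s own conversions.

```python
# Napkin arithmetic from the interview, as a quick sanity check.
# Both constants are assumptions: COST_PER_GW_USD is Krishna's rough
# estimate as reported, CAD_PER_USD is implied by $80B -> C$108B.

COST_PER_GW_USD = 80e9  # ~$80B to build and equip a 1 GW data centre
CAD_PER_USD = 1.35      # inferred from the article's C$ conversions

def capex(gigawatts: float) -> tuple[float, float]:
    """Return (USD, CAD) capital cost for a given AI capacity in GW."""
    usd = gigawatts * COST_PER_GW_USD
    return usd, usd * CAD_PER_USD

for gw in (1, 20, 30, 100):
    usd, cad = capex(gw)
    print(f"{gw:>3} GW -> ${usd / 1e12:.2f}T (C${cad / 1e12:.2f}T)")
```

Run end to end, the loop reproduces the interview’s waypoints: $0.08 trillion for a single site, $1.6 trillion to $2.4 trillion for one 20 to 30 gigawatt platform, and $8 trillion at 100 gigawatts industry-wide.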
Hardware lifecycles deepen the strain. New generations of accelerators and memory arrive faster than conventional refresh plans anticipate. Krishna warned that fleets must turn over within five years to stay competitive. “You’ve got to use it all in five years,” he said, a timeline that compresses payback windows and magnifies risk, as reported by DataCenterDynamics.
Capital costs meet rapid depreciation
The five-year cycle tightens every link in the chain, from grid connections to cooling plants. Financing structures must carry heavy depreciation while power contracts firm up supply.
Interest costs add further pressure when buildouts run ahead of realised demand. In practice, those factors push operators either to raise prices or to slow capacity ramps. Neither path guarantees the profit cushion Krishna says is required.
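To put numbers on that squeeze, here is a minimal capital-recovery sketch, assuming the hardware must pay for itself within the five-year life Krishna describes. The discount rates are illustrative assumptions; the interview names no cost of capital.

```python
# Why a five-year refresh compresses payback: the level annual payment
# needed to recover capital before the hardware is obsolete. Discount
# rates here are assumed for illustration; the interview gives none.

def annual_capital_recovery(principal: float, rate: float, years: int) -> float:
    """Capital recovery factor: the level annual payment that repays
    `principal` over `years` at annual discount rate `rate`."""
    growth = (1 + rate) ** years
    return principal * rate * growth / (growth - 1)

CAPEX_PER_GW = 80e9  # Krishna's ~$80B-per-gigawatt estimate

for rate in (0.05, 0.08, 0.10):  # assumed costs of capital
    payment = annual_capital_recovery(CAPEX_PER_GW, rate, years=5)
    print(f"r = {rate:.0%}: ~${payment / 1e9:.1f}B per GW per year")
```

On those assumptions, a single gigawatt must return roughly $18 billion to $21 billion a year just to cover its capital, before power, staffing, and networking, which is the arithmetic behind the five-year warning.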
Even so, he did not dismiss enterprise AI. Krishna described strong productivity gains for targeted workloads, while doubting that bigger models alone close the gap. He also pointed to complementary paths, including quantum systems, that could shift the cost curve over time. The Decoder conversation stressed pragmatism more than prediction. The message, in short, asked builders to match ambition with audited economics.
For infrastructure planners, the takeaway is sober and immediate. Power and land can be secured, but repayment depends on utilisation and price discipline. Procurement models may need clearer off‑ramps if technology cycles compress further. Public agencies will weigh interconnection priorities against broader industrial loads. Investors and operators, meanwhile, face the same test: make the math work before pouring more concrete.
