OpenAI signed a seven-year, $38 billion (C$53 billion) deal with Amazon Web Services to run core AI workloads at scale. The agreement gives OpenAI immediate access to AWS infrastructure, with all planned capacity targeted to be online by the end of 2026 and room to expand into 2027.
The companies say the arrangement will support both training and inference for flagship products. Beyond capacity, the pact formalizes a procurement channel for specialised chips and high-density clusters. Investors responded quickly: Amazon shares hit an all-time high after the announcement, underscoring the market's view that the cloud leader remains competitive in AI.
Scale, Chips, and Timelines
AWS will cluster Nvidia's GB200 and GB300 accelerators in EC2 UltraServers to deliver low-latency, large-scale training and inference for frontier models. "Scaling frontier AI requires massive, reliable compute," said OpenAI CEO Sam Altman.
The partners frame the deployment as immediately usable, then progressively scaled as OpenAI’s agentic workloads and context sizes grow. Technical choices matter here, because training windows and model releases increasingly hinge on multi-region cluster availability within precise power and networking envelopes. The design aims to reduce bottlenecks while keeping options to add capacity as models and datasets expand.
Diversifying Beyond Microsoft Azure
The AWS pact follows OpenAI’s restructuring and a new definitive agreement with Microsoft that resets the long-standing relationship on governance, technology access, and commercial terms.
That update clarifies rights and signals a more flexible, multi-partner infrastructure model for OpenAI's next phase. Reuters reports that Microsoft no longer holds a right of first refusal on computing services as OpenAI diversifies providers.
The same reporting points to a substantial Azure services commitment alongside a reported Oracle capacity deal and a separate decision to tap Google Cloud for incremental workload headroom. The cumulative effect reduces single-vendor risk, expands negotiating leverage on price and delivery, and spreads geopolitical and supply-chain exposure.
What It Means
For AWS, this contract validates large capital outlays on chips, data centres, and power, strengthening its pitch against Microsoft and Google for AI compute leadership. "AWS's best-in-class infrastructure will serve as a backbone for their AI ambitions," said AWS CEO Matt Garman.
The deal underlines a continuing concentration of AI compute among a few hyperscalers, with downstream implications for electricity planning, grid interconnections, and siting. If execution holds, the agreement locks in predictable utilisation for new clusters, which can improve financing terms for power and networking expansions.
It also dovetails with OpenAI’s separate plan to deploy at least 10 gigawatts of AI data centre capacity in the United States, a signal that compute procurement is converging with energy and industrial policy.
