Amazon has brought a new AI compute backbone online while expanding its Asia strategy. The company said Project Rainier is now fully operational, delivering one of the world’s largest clusters for training and serving advanced models.
In parallel, South Korea’s presidential office announced that Amazon Web Services will invest at least $5 billion (about CAD$6.9 billion) in South Korea by 2031 to build new AI data centres near Seoul. Together, the moves highlight how hyperscale AI demand is reshaping capital allocation, site selection, and public incentives.
Anthropic Capacity on Trainium2 Chips
Under Rainier, AWS has aggregated nearly half a million Trainium2 chips across multiple U.S. facilities to accelerate frontier model training, with Anthropic as the anchor customer. Amazon says Anthropic will scale to more than one million Trainium2 chips by the end of 2025 for both training and inference on Claude, a material step up from its prior compute budget.
The company frames Rainier as a repeatable pattern for rapid delivery of AI supercomputing capacity. “Project Rainier is one of AWS’s most ambitious undertakings to date,” said Ron Diamant, Trainium’s chief architect at AWS. The operational question now shifts from chip counts to interconnect reliability, thermal performance, and workload scheduling at cluster scale.
Korea Build Signals Regional Strategy
Seoul’s announcement positions AWS to deepen its Northeast Asia footprint, with phased delivery through 2031 on the city’s outskirts. The commitment of at least $5 billion (about CAD$6.9 billion) complements a June plan in which AWS joined SK Group on a $4 billion Ulsan AI data centre, about CAD$5.5 billion. Location matters, given latency to Korean users, existing cloud regions, and proximity to key semiconductor suppliers. The initiative also sits within broader APEC commitments. “We’ve invested and committed to investment of an additional $40 billion across 14 non-U.S. APEC economies between now and 2028,” AWS CEO Matt Garman said at an APEC event. Execution in Korea will hinge on timely permits, grid connections, and local contracting capacity.
Power, Water, and Procurement Risks
Hyperscale AI clusters live or die by reliable megawatts, high-density cooling, and water stewardship, which makes early utility coordination and off-site renewables procurement decisive. As a reference point for scale, AWS this year outlined plans to invest at least $11 billion in Georgia (about CAD$15.2 billion) linked to cloud and AI growth, a sign that commitments to a single state or province can now rival national programmes.
Korea’s build will likely require multi-year power-delivery agreements, on-site substation works, and reclaimed-water or adiabatic cooling systems that meet local regulations. Procurement will cut across transformers, switchgear, cooling modules, servers, and fibre, with lead times that favour framework agreements and domestic content where available. The policy trade-off is clear: governments want AI-anchored investment and skilled jobs, yet communities will scrutinize energy use, land footprint, and benefits beyond construction.
