IBM has announced its new IBM Quantum Nighthawk chip and reaffirmed its goal of building a fault-tolerant quantum computer by 2029. Nighthawk is a 120-qubit processor designed to run deeper quantum circuits through improved qubit connectivity and control. IBM says clients will be able to use it by late 2025 for workloads with thousands of two-qubit gates.
At the same time, IBM is introducing an experimental chip called Loon, a testbed for the longer-range connections and error-correction features that future, larger systems will need.
Both Nighthawk and Loon are steps on a roadmap leading to IBM's planned Starling system, which IBM says will be delivered in 2029 at a new quantum data center in Poughkeepsie. In the near term, IBM is prioritizing consistent, repeatable performance over simply raising qubit counts.
Roadmap links each chip to a date
In its public plan, IBM says Starling should be able to run 100 million gates on 200 logical qubits, with real-time error decoding and a universal instruction set. Loon comes earlier in the roadmap and is used to test c-couplers and other building blocks for quantum low-density parity-check (qLDPC) codes, which can cut error-correction overhead substantially compared with older surface codes.
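To get a feel for the overhead reduction, consider a back-of-the-envelope comparison. The figures below are illustrative simplifications, not IBM specifications: a distance-d surface code is approximated as roughly 2d² physical qubits per logical qubit, and the qLDPC example uses the published [[144, 12, 12]] "gross" code from IBM's research, counting data plus check qubits.

```python
# Toy comparison of physical-qubit overhead per logical qubit.
# All accounting here is a simplified illustration.

def surface_code_overhead(d: int) -> float:
    """Approximate physical qubits per logical qubit for a
    distance-d surface code patch (data + measurement ancillas)."""
    return 2 * d**2

def gross_code_overhead() -> float:
    """Physical qubits per logical qubit for the [[144, 12, 12]]
    'gross' qLDPC code: 144 data + 144 check qubits, 12 logical."""
    return (144 + 144) / 12

surface = surface_code_overhead(12)  # match the code distance of 12
ldpc = gross_code_overhead()
print(f"surface code: ~{surface:.0f} physical qubits per logical")
print(f"gross qLDPC:  ~{ldpc:.0f} physical qubits per logical")
print(f"reduction:    ~{surface / ldpc:.0f}x")
```

Under these simplified assumptions, the qLDPC code needs roughly an order of magnitude fewer physical qubits per logical qubit at the same code distance, which is the kind of saving that makes a 200-logical-qubit machine plausible at moderate chip sizes.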
Kookaburra, planned for 2026, adds a modular memory-plus-logic design meant to store and process encoded information across more than one chip. IBM leaders describe the whole effort as an engineering program that ties devices, control electronics, cryogenic systems, and software into one scalable service platform.
“IBM is charting the next frontier in quantum computing,” said CEO Arvind Krishna. The company still needs stable chip fabrication and more mature software tools to hit its delivery dates.
Data center design and supply chain impacts
Starling is planned to run inside a dedicated quantum data center where cryogenic stacks, microwave control hardware, and classical accelerators sit close together. That tight layout should shorten the feedback loop for decoding logical qubits and reduce bottlenecks between the quantum processor and nearby classical systems.
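A rough latency budget shows why co-location matters. The cycle time and cable distances below are illustrative assumptions, not IBM figures: superconducting error-correction cycles run on the order of a microsecond, so even the signal round trip to a decoder a few hundred meters away can eat the entire budget.

```python
# Rough latency budget for real-time logical-qubit decoding.
# All numbers are illustrative assumptions, not IBM specifications.

SYNDROME_CYCLE_US = 1.0        # assumed QEC cycle time (~1 microsecond)
SIGNAL_SPEED_M_PER_US = 200.0  # ~2/3 the speed of light in cable/fiber

def round_trip_us(distance_m: float) -> float:
    """Round-trip signal time to a decoder distance_m meters away."""
    return 2 * distance_m / SIGNAL_SPEED_M_PER_US

for distance_m in (5, 50, 500):
    rt = round_trip_us(distance_m)
    share = rt / SYNDROME_CYCLE_US
    print(f"decoder at {distance_m:>3} m: round trip {rt:.2f} us "
          f"({share:.0%} of the cycle budget)")
```

In this sketch, a decoder 500 m away already exceeds the assumed cycle budget on wire delay alone, before any decoding work is done, which is the argument for packing quantum processors and classical decoders into the same facility.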
Building such a data center will require specialized suppliers for dilution refrigerators, high-frequency electronics, optical links, and vacuum packaging. IBM also pointed to advanced manufacturing at the Albany NanoTech Complex for its Loon test chip, signaling a closer link between research labs and production lines.
IBM envisions a quantum-enhanced high-performance computing setup in which error-corrected quantum systems sit alongside supercomputers, together running far deeper circuits than today's noisy quantum devices can handle. Data center operators will still need to watch power use, cooling, and floor loading, even if quantum systems fit into smaller racks than many classical clusters.
What buyers and procurement teams should watch
A 2029 target gives buyers time to move from small pilot projects to phased adoption tied to clear error-correction milestones. Benchmarking and independent verification become more important as customers look for metrics that map directly to algorithms, runtimes, and total cost of ownership. On 6 November 2025, IBM advanced to Stage B of the U.S. Defense Advanced Research Projects Agency (DARPA) Quantum Benchmarking Initiative, adding third-party review to its roadmap. The program is meant to test whether different approaches can scale to useful, industry-grade fault tolerance by the early 2030s.
“IBM’s progression to Stage B of DARPA’s Quantum Benchmarking Initiative is a firm validation of IBM’s approach,” said Jay Gambetta.
Procurement teams will likely begin writing contracts that focus on verifiable circuit depth, decoder performance, and service-level guarantees rather than raw qubit counts.
