Wright’s Law – why fast feedback is essential

Traditionally, Moore’s Law has been the guiding metric for the pace of technological advancement in computer chips. While this correlation between performance increase and time still seems applicable to the semiconductor industry and might become relevant for the quantum chip industry, we think another framework is better suited to this early stage of Quantum Processing Units (QPUs). In Wright’s Law the learning curve is measured in cost per unit and depends only on cumulative output, removing the uncertainty about the time needed to produce the units. Also referred to as the ‘experience curve’, this perspective, placed in the context of quantum computing, highlights that the key levers for improving QPU performance are shorter cycle times and increased device throughput. In other words: iterate faster.


In its revised form from 1975, Moore’s Law has been very successful in predicting that the number of transistors on a microchip doubles approximately every two years, achieved chiefly by shrinking the transistors themselves. However, the underlying assumption is a steady increase in both demand and manufacturing capacity.

A more general but lesser-known framework was pioneered by Theodore Wright as early as 1936. It forecasts cost decline solely as a function of cumulative production output, agnostic of the time needed to reach it, thereby capturing the ‘learning curve’ of an industry. Wright’s Law states that for every doubling of cumulative production there is a predictable, systematic reduction in cost and increase in production efficiency. It correctly describes more than 100 years of cost declines in automobile production (Source: ARK Invest) and combines the effects of economies of scale and learning through experience in a single metric.
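The doubling rule above corresponds to the standard experience-curve formula C(n) = C(1) · n^(−b), where 2^(−b) is the fraction of cost remaining after each doubling of cumulative output. A minimal sketch in Python — the 15% learning rate and cost figures are illustrative placeholders, not quantum-industry data:

```python
import math

def wright_cost(first_unit_cost, cumulative_units, learning_rate):
    """Cost of the n-th unit under Wright's Law.

    learning_rate: fractional cost reduction per doubling of
    cumulative output (e.g. 0.15 for a 15% drop per doubling).
    """
    b = -math.log2(1 - learning_rate)  # experience exponent
    return first_unit_cost * cumulative_units ** -b

# Illustration: 15% cost decline per doubling of cumulative output
c1 = wright_cost(100.0, 1, 0.15)   # 100.0  (first unit)
c2 = wright_cost(100.0, 2, 0.15)   # 85.0   (one doubling)
c4 = wright_cost(100.0, 4, 0.15)   # 72.25  (two doublings)
```

Note that elapsed time appears nowhere in the formula: only cumulative units produced drives the cost down, which is exactly what distinguishes Wright’s Law from Moore’s Law.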

Despite the differences in formulation, both models capture the same phenomenon: learning through iteration. Broadly speaking, experience refers to the number of iterations performed. In this context we use time-to-data as a measure of the delay between fabricating a new device and the availability of its test results. The task of test equipment is to reduce the time-to-data as much as possible, so that new designs and processes can be validated faster. There are two natural levers to accelerate device learning: increase the volume / throughput (parallelization) or reduce the cycle time.
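The effect of the two levers can be made concrete with a toy queueing calculation. The sketch below is purely illustrative; the queue length, cycle time, and slot count are hypothetical numbers, not specifications of any real test setup:

```python
import math

def time_to_data(devices_ahead, cycle_time_days, parallel_slots):
    """Days until test results for a newly fabricated device arrive,
    assuming it joins the back of a first-in-first-out test queue."""
    batches = math.ceil((devices_ahead + 1) / parallel_slots)
    return batches * cycle_time_days

# Baseline: 9 devices already queued, 10-day test cycle, 1 test slot
baseline = time_to_data(9, 10, 1)   # 100 days
# Lever 1: parallelize across 5 test slots
parallel = time_to_data(9, 10, 5)   # 20 days
# Lever 2: halve the cycle time instead
faster = time_to_data(9, 5, 1)      # 50 days
```

Even in this crude model, both levers cut time-to-data substantially, and they compound when applied together.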

A schematic depiction of the development cycle for Quantum Processing Units is shown in the figure below. In each block there are different ways to accelerate cycle times, and each faces its own challenges regarding scalability. For quantum chips that require cryogenic electrical testing, characterization is especially challenging, since it cannot yet be done in-line.

Schematic workflow and development cycle for Quantum Processing Units. Source: Gao, et al. PRXQ (2021)

The fabrication of solid-state qubits based on superconducting circuits or electron spins in silicon can in large part be derived from CMOS-compatible processes. Transferring these practices to quantum devices is far from trivial, but in terms of tooling for wafer processing one can draw on decades of experience in semiconductor manufacturing. For device characterization and end-of-line testing, however, throughput is not as easy to scale. Fundamental differences from classical compute chips, such as the requirement for cryogenic temperatures and the need for qubit calibration, hinder a direct technology transfer. The throughput of wafer-scale production processes exceeds cryogenic testing capacity by multiple orders of magnitude.

To coherently accelerate learning along the lines of Wright’s Law, the industry therefore needs a tool to ‘put in qubits and get out data’. To address this industrial testing use case, OrangeQS has developed an industry-level test system. At its product launch by the end of this year, we will share how it extracts the chip quality parameters of a 100+ qubit chip in less than 10 days.

In summary, quantum computing is still a nascent industry, and since it is not clear how many qubits a useful quantum computer needs, we cannot predict what production volumes will look like in the future. With that in mind, it could be more viable to identify a derivative of Wright’s Law than to search for a Moore’s Law for quantum (Source: Olivier Ezratty on arXiv). Independent of any quantitative metric governing progress, it is clear that fabricating and testing more devices can help make quantum computing a reality sooner.

Since testing is currently one of the key limiting factors in the development cycle, OrangeQS is dedicated to developing utility-scale quantum chip test equipment with throughput on par with current QPU fabrication sites. Stay tuned for future updates on our industry-level test equipment.