A quantum chip test time analysis

The industry standard for quantifying single-qubit error rates and corresponding qubit fidelities in transmon qubits is the single-qubit randomized benchmarking protocol. In our recent SPIE publication, we employ this protocol to assess the time required for testing and extracting fidelity values for individual qubits.

The outcome serves as a baseline for establishing a targeted test time for an integrated test system when extracting error rates of all qubits on a 100-qubit quantum processor.

Based on this extrapolative assessment, starting with chip insertion and cool-down, and ending with the retrieval of the quantum chip under test, a complete test duty cycle is estimated to take about a week. Considering the current state of technology, such a test cadence in quantum chip testing can be considered high throughput.

The acceleration of quantum chip development cycles requires novel approaches to automated testing and metrology to keep the development cadence high while increasing the qubit count even further. These insights were presented by our Director D&E, Dr. Thorsten Last, at SPIE Advanced Lithography + Patterning in February this year and are laid out in this paper.

Single-qubit randomized benchmarking of flux-tunable transmons

Presentation by Dr. Thorsten Last at SPIE Advanced Lithography + Patterning on single-qubit randomized benchmarking of flux-tunable transmons


Obtaining the key performance metrics of transmon qubits in an efficient way is an intricate challenge for several reasons. Most importantly, the quantum chip must be cooled down to millikelvin temperatures to obtain dependable, low-noise results. Preceding the cool-down are the wiring and the sample exchange, which become increasingly complex since the number of cables scales linearly with the number of qubits. Depending on the specific architecture, the qubits can be operated using a range of parameters and need to be tuned up before the quantities of interest can be measured. Raising the temperature or skipping steps in the tune-up can significantly accelerate testing, but will jeopardize the validity of the results. While clear standards are still pending, we define testing as the workflow containing all of the necessary steps described above.

The single-qubit randomized benchmarking (RB) protocol

The core idea behind the randomized benchmarking experiment is to apply a random sequence of quantum gates to some predefined initial state. Each gate corresponds to an elementary building block of an arbitrary quantum operation. To construct a normalized measure of the errors accumulated while applying the gates, we require that in the absence of any errors the initial state is left unchanged. This is achieved by applying the inverse of the total gate sequence at the end of the protocol.

In practice, each operation carries some associated error. The effect of these errors is quantified by the degree to which the final state deviates from the initial state. Mathematically, this is represented by the overlap between the initial and the output state vectors, also called the fidelity of the state.

By randomly sampling and averaging over many sequences, we can estimate the average cumulative error of a gate sequence of fixed length, rather than the errors of individual gates. The fidelity of the final state then offers a quantitative measure of this cumulative error. The metric hence characterizes a “collective” error rate and decays exponentially with the length of the gate sequence.
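As a minimal sketch of this decay (not the exact protocol used in the paper, which samples gates from the Clifford group), the Python snippet below applies Haar-random single-qubit gates with a simple depolarizing error channel, undoes the sequence, and records the probability of returning to the initial state:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_gate():
    """Haar-random 2x2 unitary (stand-in for a random Clifford gate)."""
    z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

def survival(m, eps, n_seq=20):
    """Average probability of returning to |0> after m random gates
    plus the recovery gate, with depolarizing strength eps per gate."""
    probs = []
    for _ in range(n_seq):
        rho = np.array([[1, 0], [0, 0]], dtype=complex)  # start in |0><0|
        total = np.eye(2, dtype=complex)
        for _ in range(m):
            u = random_gate()
            total = u @ total
            rho = u @ rho @ u.conj().T
            rho = (1 - eps) * rho + eps * np.eye(2) / 2  # gate error
        rho = total.conj().T @ rho @ total               # recovery gate
        probs.append(rho[0, 0].real)                     # overlap with |0>
    return float(np.mean(probs))

# For this noise model the survival probability is 0.5 + 0.5*(1 - eps)**m,
# i.e. it decays exponentially with the sequence length m.
decay = [survival(m, eps=0.01) for m in (1, 16, 64)]
```

Fitting such a decay curve to the measured survival probabilities is what turns raw RB data into an average gate error rate.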

The test time analysis

One important aspect of the reported test time analysis is that a qubit first needs to be prepared before the randomized benchmarking protocol can be executed. In this example, a total of 19 tune-up steps is carried out before the RB protocol, resulting in a test time of about 20 minutes per qubit, of which the measurements up to the coherence times take 8 minutes. The workflow and time breakdown analysis of the RB protocol are shown in the figure below.

Time breakdown of the randomized benchmarking protocol for transmon qubits, together with the calibration of the qubit



Extrapolating from this workflow, testing single-qubit fidelities on a 100-qubit-scale quantum chip would take about 2.5 days in addition to the cryogenic cycle. Moreover, a complete cycle, involving the insertion of the hecto-qubit quantum processor into the system, conducting the tests, and subsequently retrieving the chip, is estimated to take about a week.
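The serial part of this estimate follows from simple arithmetic; the sketch below reproduces it from the reported ~20 minutes per qubit. The interpretation of the gap to the ~2.5-day figure (re-calibration and other overhead) is our assumption, not a number from the paper:

```python
MIN_PER_QUBIT = 20   # reported tune-up + RB time per qubit, in minutes
N_QUBITS = 100

# Pure measurement time, assuming the qubits are tested strictly serially.
serial_hours = N_QUBITS * MIN_PER_QUBIT / 60   # ~33 hours
serial_days = serial_hours / 24                # ~1.4 days

# The ~2.5-day figure presumably also covers per-qubit overhead such as
# re-calibration and data handling (our assumption); cool-down and warm-up
# of the cryostat then bring the full insertion-to-retrieval cycle to
# roughly one week.
print(f"serial measurement time: {serial_hours:.0f} h ({serial_days:.1f} days)")
```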

This estimation provides valuable insights into the number of quantum processors that can be effectively tested within a set development cycle, up to the single-qubit RB level.

The findings highlight the need for dedicated utility-scale equipment to test current quantum processors, integrating sample handling, the cryogenic environment, room-temperature control, diagnostic protocols, and automation algorithms. OrangeQS is currently the only company providing automated test equipment capable of testing 100+ qubit chips at millikelvin temperatures.

Find the full SPIE publication here.