Measuring Quantum Computer Power With IBM Quantum Volume

If you can't measure it, you can't improve it. Quantum computers have the potential to be vastly more powerful than regular computers, and IBM created the Quantum Volume metric to measure that power by integrating all of the factors that affect a quantum computer's processing capability. IBM recently updated the metric from an earlier definition.

The single-number metric, quantum volume, can be measured using a concrete protocol on near-term quantum computers of modest size (fewer than 50 qubits). IBM measured it on several state-of-the-art transmon IBM Q devices and found values as high as 8. The quantum volume is linked to system error rates and is empirically reduced by uncontrolled interactions within the system. It quantifies the largest random circuit of equal width and depth that the computer successfully implements. Quantum computing systems with high-fidelity operations, high connectivity, large calibrated gate sets, and circuit-rewriting toolchains are expected to have higher quantum volumes. Because quantum volume is architecture independent, it can be applied to any system capable of running quantum circuits, making it a pragmatic way to measure and compare progress toward improved system-wide gate error rates for near-term quantum computation and error-correction experiments. IBM conjectures that systems with higher connectivity will have higher quantum volume, given otherwise similar performance parameters.

From numerical simulations for a given connectivity, IBM found that there are two possible paths to increasing the quantum volume. Although all operations must improve to increase it, the first path is to prioritize improving gate fidelity over other operations, such as measurement and initialization. This sets the roadmap for device performance to focus on the errors that limit gate performance, such as coherence and calibration errors. The second path stems from the observation that, for these devices and this metric, circuit optimization is becoming important. IBM researchers implemented various circuit optimization passes (far from optimal) and showed a measurable change in experimental performance, and they introduced an approximate optimization method for NISQ devices and used it to show experimental improvements.

IBM has determined that its quantum devices are close to being fundamentally limited by coherence times, which for the IBM Q System One average 73 microseconds.
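To make the pass/fail rule behind the metric concrete, here is a minimal Python sketch of how a quantum volume figure could be extracted once heavy-output counts have been collected for square model circuits (width equal to depth). The counts, the two-sigma margin, and the names heavy_output_results and passes are illustrative assumptions for this post, not IBM's published code or measured data; the real protocol also requires classically simulating each random circuit to identify which outputs count as "heavy."

```python
# Minimal sketch of the quantum volume decision rule, under the assumptions above.
# For each width/depth m, we assume the fraction of heavy outputs observed on the
# device has already been tallied. The numbers are placeholders, not IBM data.
import math

# hypothetical results: width/depth m -> (heavy-output count, total samples)
heavy_output_results = {
    2: (920, 1000),
    3: (840, 1000),
    4: (680, 1000),
    5: (640, 1000),
}

THRESHOLD = 2.0 / 3.0  # pass mark for the heavy-output probability


def passes(heavy: int, total: int, z: float = 2.0) -> bool:
    """True if the heavy-output fraction clears 2/3 with a z-sigma margin."""
    p_hat = heavy / total
    sigma = math.sqrt(p_hat * (1.0 - p_hat) / total)
    return p_hat - z * sigma > THRESHOLD


# Quantum volume is 2**m for the largest square circuit (width = depth = m)
# that the device runs successfully under the rule above; m = 0 means no size passed.
largest_m = max(
    (m for m, (h, n) in heavy_output_results.items() if passes(h, n)),
    default=0,
)
quantum_volume = 2 ** largest_m

print(f"largest passing width/depth: {largest_m}")
print(f"quantum volume: {quantum_volume}")
```

With these placeholder numbers, width/depth 3 is the largest size that clears the two-thirds threshold, giving a quantum volume of 2^3 = 8, which happens to equal the highest value reported for the transmon devices above.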

SOURCES: IBM Research; Arxiv, "Validating quantum computers using randomized model circuits"

Written By Brian Wang
