Classical simulations of quantum algorithms play a pivotal role in the development of quantum computing devices. They are essential both for providing benchmark data for validation and for representing a crucial term of comparison to justify claims of quantum speed-up in the solution of computational problems.
In this study, we investigate the supervised learning of output expectation values of random quantum circuits [1]. Deep convolutional neural networks (CNNs) are trained to predict single-qubit and two-qubit expectation values using databases of classically simulated circuits. These circuits are built using either a universal gate set or a continuous set of rotations plus an entangling gate, and they are represented via properly designed encodings of these gates.
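As a minimal illustration of the kind of data such a network is trained on, the sketch below simulates a small random circuit built from single-qubit rotations plus a CZ entangling gate, computes the exact single-qubit ⟨Z⟩ expectation values that serve as regression targets, and builds a simple angle-based encoding of the gate sequence that a CNN could take as input. All function names, the encoding choice (cos/sin channels), and the circuit layout are illustrative assumptions, not the actual code of Ref. [1].

```python
import numpy as np

# Illustrative sketch (assumed layout, not the paper's code):
# a depth-by-qubits circuit of Ry rotations followed by a chain of CZ gates.
rng = np.random.default_rng(0)
n_qubits, depth = 3, 4

def ry(theta):
    """Single-qubit Y-rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, q, n):
    """Apply a 2x2 gate to qubit q of an n-qubit statevector."""
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [q]))
    state = np.moveaxis(state, 0, q)
    return state.reshape(-1)

def apply_cz(state, q1, q2, n):
    """Apply a CZ gate: flip the sign where both qubits are |1>."""
    state = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    state[tuple(idx)] *= -1
    return state.reshape(-1)

# One rotation angle per qubit per layer.
angles = rng.uniform(0, 2 * np.pi, size=(depth, n_qubits))

state = np.zeros(2 ** n_qubits)
state[0] = 1.0
for layer in range(depth):
    for q in range(n_qubits):
        state = apply_1q(state, ry(angles[layer, q]), q, n_qubits)
    for q in range(n_qubits - 1):
        state = apply_cz(state, q, q + 1, n_qubits)

# Exact single-qubit <Z> values: the CNN's regression targets.
probs = np.abs(state) ** 2
def z_expect(q):
    bits = (np.arange(2 ** n_qubits) >> (n_qubits - 1 - q)) & 1
    return np.sum(probs * (1 - 2 * bits))

targets = np.array([z_expect(q) for q in range(n_qubits)])

# A possible CNN input encoding: a (depth, n_qubits, 2) "image"
# with cos/sin of each rotation angle as the two channels.
encoding = np.stack([np.cos(angles), np.sin(angles)], axis=-1)
```

A dataset of many such (encoding, targets) pairs, generated by classical simulation, is what the supervised training procedure consumes; the scalable CNN architecture then maps circuit "images" of this form to predicted expectation values.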
We analyze the prediction accuracy for previously unseen circuits, comparing the performance of our CNNs with small-scale quantum computers available through the free IBM Quantum program. The CNNs often outperform these quantum devices, depending on the circuit depth, the network depth, and the training-set size.
Notably, our CNNs are designed to be scalable, allowing us to exploit transfer learning and to extrapolate to circuits larger than those included in the training set. Moreover, these CNNs demonstrate remarkable resilience against noise, remaining accurate even when trained on (simulated) expectation values averaged over very few measurements.
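The few-measurement training regime mentioned above can be mimicked in a short sketch: instead of the exact ⟨Z⟩, each training target is the average of a small number of simulated projective measurements. The function name and shot counts below are illustrative assumptions, not parameters from Ref. [1].

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_z_target(exact_z, n_shots, rng):
    """Estimate <Z> by averaging n_shots simulated single-qubit measurements.

    Each shot yields +1 (outcome |0>) with probability (1 + <Z>)/2,
    and -1 (outcome |1>) otherwise.
    """
    p_one = (1.0 - exact_z) / 2.0          # probability of measuring |1>
    ones = rng.binomial(n_shots, p_one)    # number of -1 outcomes
    return (n_shots - 2 * ones) / n_shots  # (+1 count - -1 count) / shots

exact = 0.3
few = noisy_z_target(exact, 16, rng)       # very few shots: noisy target
many = noisy_z_target(exact, 100_000, rng)  # many shots: near-exact target
```

Training labels of the `few`-shot kind carry statistical noise of order 1/sqrt(shots); the abstract's claim is that the CNNs remain accurate even when trained on such noisy averages.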
[1] S. Cantori et al., Quantum Sci. Technol. 8, 025022 (2023).
Abstract category: Quantum Computing