
Adaptive Quantum State Tomography with Active Learning (2203.15719v6)

Published 29 Mar 2022 in quant-ph, cond-mat.dis-nn, and cond-mat.quant-gas

Abstract: Recently, tremendous progress has been made in the field of quantum science and technologies: different platforms for quantum simulation as well as quantum computing, ranging from superconducting qubits to neutral atoms, are starting to reach unprecedentedly large systems. In order to benchmark these systems and gain physical insights, the need for efficient tools to characterize quantum states arises. The exponential growth of the Hilbert space with system size renders a full reconstruction of the quantum state prohibitively demanding in terms of the number of necessary measurements. Here we propose and implement an efficient scheme for quantum state tomography using active learning. Based on a few initial measurements, the active learning protocol proposes the next measurement basis, designed to yield the maximum information gain. We apply the active learning quantum state tomography scheme to reconstruct different multi-qubit states with varying degrees of entanglement, as well as ground states of the XXZ model in 1D and a kinetically constrained spin chain. In all cases, we obtain a significantly improved reconstruction compared to one based on the exact same number of measurements and measurement configurations but with randomly chosen basis configurations. Our scheme is highly relevant for gaining physical insights into quantum many-body systems as well as for benchmarking and characterizing quantum devices, e.g., for quantum simulation, and paves the way for scalable adaptive protocols to probe, prepare, and manipulate quantum systems.
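The adaptive cycle described in the abstract — propose the next measurement basis that maximizes expected information gain, measure, then update the state estimate — can be sketched on a toy single-qubit problem. Everything below is an illustrative assumption, not the paper's implementation: the unknown state is discretized to a few candidate states, the measurement settings are the three Pauli bases, and "information gain" is taken as the expected reduction in Shannon entropy of a Bayesian posterior over the candidates.

```python
import numpy as np

# Single-qubit Pauli operators; their eigenbases serve as the candidate
# measurement settings the active learner chooses between (an assumption
# made for this toy example).
PAULIS = {
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def projectors(basis):
    """Rank-1 eigenprojectors (the two measurement outcomes) of a Pauli."""
    _, vecs = np.linalg.eigh(PAULIS[basis])
    return [np.outer(vecs[:, i], vecs[:, i].conj()) for i in range(2)]

def shannon(p):
    """Shannon entropy (bits) of a probability vector, ignoring zeros."""
    p = np.asarray(p)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def expected_info_gain(weights, states, basis):
    """Expected drop in posterior entropy if `basis` is measured next."""
    h0 = shannon(weights)
    gain = 0.0
    for proj in projectors(basis):
        # Likelihood of this outcome under each candidate state rho.
        like = np.array([np.real(np.trace(proj @ rho)) for rho in states])
        p_out = float(weights @ like)
        if p_out < 1e-12:
            continue
        gain += p_out * (h0 - shannon(weights * like / p_out))
    return gain

def choose_basis(weights, states):
    """Greedy step: propose the basis with maximal expected information gain."""
    return max(PAULIS, key=lambda b: expected_info_gain(weights, states, b))

# Toy demo: three candidate states |0>, |1>, |+> with a uniform prior.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
ketp = (ket0 + ket1) / np.sqrt(2)
states = [np.outer(k, k.conj()) for k in (ket0, ket1, ketp)]
weights = np.ones(3) / 3

print(choose_basis(weights, states))  # -> "Z" (largest expected gain at the uniform prior)

# Simulated adaptive run: measure the proposed basis on a "true" state,
# then Bayes-update the posterior over candidates.
rng = np.random.default_rng(0)
true_rho = states[2]  # pretend |+> is the unknown state
for _ in range(5):
    basis = choose_basis(weights, states)
    projs = projectors(basis)
    probs = [np.real(np.trace(proj @ true_rho)) for proj in projs]
    outcome = rng.choice(2, p=probs)  # simulated measurement shot
    like = np.array([np.real(np.trace(projs[outcome] @ rho)) for rho in states])
    weights = weights * like / (weights @ like)  # Bayesian posterior update
```

At the uniform prior, measuring Y gains nothing (all three candidates give 50/50 outcomes), while Z maximally discriminates |0> from |1>, so the greedy criterion proposes Z first. The paper's actual protocol targets multi-qubit states and does not rely on this discretization; the sketch only illustrates the propose-measure-update loop.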

Citations (13)
