
Testing and learning structured quantum Hamiltonians (2411.00082v2)

Published 31 Oct 2024 in quant-ph, cs.CC, and cs.DS

Abstract: We consider the problems of testing and learning an unknown $n$-qubit Hamiltonian $H$ from queries to its evolution operator $e^{-iHt}$ under the normalized Frobenius norm. We prove:

1. Local Hamiltonians: We give a tolerant testing protocol to decide if $H$ is $\epsilon_1$-close to $k$-local or $\epsilon_2$-far from $k$-local, with $O(1/(\epsilon_2-\epsilon_1)^{4})$ queries, solving open questions posed in a recent work by Bluhm et al. For learning a $k$-local $H$ up to error $\epsilon$, we give a protocol with query complexity $\exp(O(k^2+k\log(1/\epsilon)))$ independent of $n$, by leveraging the non-commutative Bohnenblust-Hille inequality.

2. Sparse Hamiltonians: We give a protocol to test if $H$ is $\epsilon_1$-close to being $s$-sparse (in the Pauli basis) or $\epsilon_2$-far from being $s$-sparse, with $O(s^{6}/(\epsilon_2^2-\epsilon_1^2)^{6})$ queries. For learning up to error $\epsilon$, we show that $O(s^{4}/\epsilon^{8})$ queries suffice.

3. Learning without memory: The learning results stated above have no dependence on $n$, but require $n$-qubit quantum memory. We give subroutines that allow us to learn without memory; increasing the query complexity by a $(\log n)$-factor in the local case and an $n$-factor in the sparse case.

4. Testing without memory: We give a new subroutine called Pauli hashing, which allows one to tolerantly test $s$-sparse Hamiltonians with $O(s^{14}/(\epsilon_2^2-\epsilon_1^2)^{18})$ queries. A key ingredient is showing that $s$-sparse Pauli channels can be tolerantly tested under the diamond norm with $O(s^2/(\epsilon_2-\epsilon_1)^6)$ queries.

Along the way, we prove new structural theorems for local and sparse Hamiltonians. We complement our learning results with polynomially weaker lower bounds. Furthermore, our algorithms use short time evolutions and do not assume prior knowledge of the terms in the support of the Pauli spectrum.
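The abstract's learning results rest on the fact that a Hamiltonian is determined by its Pauli coefficients $c_P = \mathrm{tr}(PH)/2^n$, and that these are accessible through short time evolutions, since $(e^{iHt} - e^{-iHt})/(2it) = H + O(t^2)$. The sketch below is not the paper's query protocol; it is a minimal NumPy illustration of that first-order relation, assuming full classical access to the evolution unitary on a toy 3-qubit, 2-local Hamiltonian:

```python
import numpy as np
from functools import reduce

# Single-qubit Pauli matrices.
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string(s):
    """Tensor product of single-qubit Paulis, e.g. 'XXI'."""
    return reduce(np.kron, (PAULIS[c] for c in s))

def evolve(H, t):
    """U(t) = exp(-iHt) via eigendecomposition (H is Hermitian)."""
    E, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

# A toy 2-local Hamiltonian on n = 3 qubits with known Pauli coefficients
# (hypothetical example terms, not taken from the paper).
n = 3
coeffs = {"XXI": 0.7, "IZZ": -0.3, "ZII": 0.5}
H = sum(c * pauli_string(p) for p, c in coeffs.items())

# Two short-time evolutions give H up to O(t^2):
#   (U(-t) - U(t)) / (2it) = H + O(t^2).
t = 1e-3
H_est = (evolve(H, -t) - evolve(H, t)) / (2j * t)

# Recover each coefficient as c_P = tr(P H_est) / 2^n.
for p, c in coeffs.items():
    c_est = np.real(np.trace(pauli_string(p) @ H_est)) / 2**n
    assert abs(c_est - c) < 1e-4
```

In the paper's setting one only has query access to $e^{-iHt}$ and must estimate these traces with measurements (and, in the memoryless case, without an $n$-qubit ancilla); the snippet only shows why short evolution times already expose the Pauli spectrum.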
