Proper vs Improper Quantum PAC learning (2403.03295v1)

Published 5 Mar 2024 in quant-ph, cs.CC, and cs.LG

Abstract: A basic question in the PAC model of learning is whether proper learning is harder than improper learning. In the classical case, there are examples of concept classes with VC dimension $d$ that have sample complexity $\Omega\left(\frac{d}{\epsilon}\log\frac{1}{\epsilon}\right)$ for proper learning with error $\epsilon$, while the complexity for improper learning is $O\!\left(\frac{d}{\epsilon}\right)$. One such example arises from the Coupon Collector problem. Motivated by the efficiency of proper versus improper learning with quantum samples, Arunachalam, Belovs, Childs, Kothari, Rosmanis, and de Wolf (TQC 2020) studied an analogue, the Quantum Coupon Collector problem. Curiously, they discovered that for learning size $k$ subsets of $[n]$ the problem has sample complexity $\Theta(k\log\min\{k,n-k+1\})$, in contrast with the complexity of $\Theta(k\log k)$ for Coupon Collector. This effectively negates the possibility of a separation between the two modes of learning via the quantum problem, and Arunachalam et al. posed the possibility of such a separation as an open question. In this work, we first present an algorithm for the Quantum Coupon Collector problem with sample complexity that matches the sharper lower bound of $(1-o_k(1))k\ln\min\{k,n-k+1\}$ shown recently by Bab Hadiashar, Nayak, and Sinha (IEEE TIT 2024), for the entire range of the parameter $k$. Next, we devise a variant of the problem, the Quantum Padded Coupon Collector. We prove that its sample complexity matches that of the classical Coupon Collector problem for both modes of learning, thereby exhibiting the same asymptotic separation between proper and improper quantum learning as mentioned above. The techniques we develop in the process can be directly applied to any form of padded quantum data. We hope that padding can more generally lift other forms of classical learning behaviour to the quantum setting.
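
For intuition on the classical baseline invoked in the abstract, the following minimal simulation (an illustration, not code from the paper; the parameter values are arbitrary) estimates the Coupon Collector time: drawing uniformly from $k$ coupon types until every type has appeared takes $k H_k = k\ln k + O(k)$ draws in expectation, which is the $\Theta(k\log k)$ sample complexity quoted above.

```python
import math
import random

def coupon_collector_draws(k: int) -> int:
    """Draw uniformly at random from k coupon types until all k
    have been seen; return the number of draws used."""
    seen: set[int] = set()
    draws = 0
    while len(seen) < k:
        seen.add(random.randrange(k))
        draws += 1
    return draws

if __name__ == "__main__":
    k, trials = 100, 2000
    avg = sum(coupon_collector_draws(k) for _ in range(trials)) / trials
    # Exact expectation is k * H_k (H_k = k-th harmonic number),
    # which is k ln k + O(k) -- the Theta(k log k) rate cited above.
    h_k = sum(1.0 / i for i in range(1, k + 1))
    print(f"empirical mean: {avg:.1f}  k*H_k: {k * h_k:.1f}  k ln k: {k * math.log(k):.1f}")
```

For $k = 100$, $k H_k \approx 519$ while $k\ln k \approx 461$; the empirical mean should land near the former, the gap reflecting the $O(k)$ lower-order term.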

References (16)
  1. Anurag Anshu and Srinivasan Arunachalam. A survey on the complexity of learning quantum states. Nature Reviews Physics, 6(1):59–69, December 2023.
  2. Srinivasan Arunachalam, Aleksandrs Belovs, Andrew M. Childs, Robin Kothari, Ansis Rosmanis, and Ronald de Wolf. Quantum Coupon Collector. In Steven T. Flammia, editor, 15th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2020), volume 158 of Leibniz International Proceedings in Informatics (LIPIcs), pages 10:1–10:17, Dagstuhl, Germany, 2020. Schloss Dagstuhl–Leibniz-Zentrum für Informatik.
  3. Srinivasan Arunachalam and Ronald de Wolf. Guest column: A survey of quantum learning theory. SIGACT News, 48(2):41–67, June 2017.
  4. Srinivasan Arunachalam and Ronald de Wolf. Optimal quantum sample complexity of learning algorithms. Journal of Machine Learning Research, 19(1):2879–2925, January 2018.
  5. Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM, 36(4):929–965, October 1989.
  6. Nader H. Bshouty and Jeffrey C. Jackson. Learning DNF over the uniform distribution using a quantum example oracle. SIAM Journal on Computing, 28(3):1136–1153, 1998.
  7. User “cardinal” (https://stats.stackexchange.com/users/2970/cardinal). What is a tight lower bound on the coupon collector time? Available at https://stats.stackexchange.com/q/7917.
  8. Shima Bab Hadiashar, Ashwin Nayak, and Pulkit Sinha. Optimal lower bounds for quantum learning via information theory. IEEE Transactions on Information Theory, 70(3):1876–1896, March 2024.
  9. Steve Hanneke. The optimal sample complexity of PAC learning. Journal of Machine Learning Research, 17(38):1–15, January 2016.
  10. Steve Hanneke. Proper PAC learning VC dimension bounds. Available at https://cstheory.stackexchange.com/questions/40161/proper-pac-learning-vc-dimension-bounds, July 11, 2019.
  11. Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, 1963.
  12. Michael Mitzenmacher and Eli Upfal. Probability and Computing: Randomization and Probabilistic Techniques in Algorithms and Data Analysis. Cambridge University Press, second edition, 2017.
  13. Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, Cambridge, UK, 2014.
  14. Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
  15. Vladimir N. Vapnik and Alexey Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264–280, 1971.
  16. John Watrous. The Theory of Quantum Information. Cambridge University Press, May 2018.