Bandit Quickest Changepoint Detection (2107.10492v3)

Published 22 Jul 2021 in cs.LG, cs.IT, math.IT, and stat.ML

Abstract: Many industrial and security applications employ a suite of sensors for detecting abrupt changes in temporal behavior patterns. These abrupt changes typically manifest locally, rendering only a small subset of sensors informative. Continuously monitoring every sensor can be expensive under resource constraints, which motivates the bandit quickest changepoint detection problem: sensing actions (or sensors) are chosen sequentially, and only measurements corresponding to the chosen actions are observed. We derive an information-theoretic lower bound on the detection delay for a general class of finitely parameterized probability distributions. We then propose a computationally efficient online sensing scheme that balances exploration of different sensing options against exploitation of the most informative actions. We derive expected delay bounds for the proposed scheme and show that they match our information-theoretic lower bounds at low false alarm rates, establishing the optimality of the method. Finally, experiments on synthetic and real datasets demonstrate the method's effectiveness.
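To make the sensing/detection loop concrete, below is a minimal illustrative sketch, not the paper's algorithm: each sensor keeps a CUSUM statistic driven by the log-likelihood ratio of its observations, and an epsilon-greedy rule picks the single sensor to query at each step, trading exploration of all sensors against exploitation of the currently most suspicious one. The Gaussian model, all constants (K, changepoint, threshold, eps), and the epsilon-greedy heuristic are assumptions made purely for illustration; the paper's scheme balances exploration and exploitation in a more principled way.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative setup (not from the paper): K Gaussian sensors; after the
    # changepoint, one sensor's mean shifts from mu0 to mu1.
    K, changepoint, affected = 5, 300, 2
    mu0, mu1, sigma = 0.0, 1.0, 1.0
    threshold, eps = 8.0, 0.1       # alarm threshold, exploration rate

    def observe(t, k):
        """Sample sensor k at time t (the shift appears only on `affected`)."""
        mean = mu1 if (t >= changepoint and k == affected) else mu0
        return rng.normal(mean, sigma)

    def llr(x):
        """Log-likelihood ratio of N(mu1, sigma^2) vs N(mu0, sigma^2) at x."""
        return (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2

    cusum = np.zeros(K)             # one CUSUM statistic per sensor
    for t in range(10_000):
        # Epsilon-greedy sensing: usually query the most suspicious sensor,
        # occasionally explore a random one (a stand-in for the paper's rule).
        k = rng.integers(K) if rng.random() < eps else int(np.argmax(cusum))
        cusum[k] = max(0.0, cusum[k] + llr(observe(t, k)))
        if cusum.max() > threshold:
            print(f"alarm at t={t} on sensor {int(np.argmax(cusum))} "
                  f"(true change: t={changepoint}, sensor {affected})")
            break

Raising the threshold lowers the false alarm rate at the cost of a longer detection delay; that tradeoff is exactly what the paper's lower and upper bounds quantify.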

References (36)
  1. Samaneh Aminikhanghahi and Diane J. Cook “A Survey of Methods for Time Series Change Point Detection” In Knowl. Inf. Syst. 51.2 Berlin, Heidelberg: Springer-Verlag, 2017, pp. 339–367
  2. Ryan Prescott Adams and David J.C. MacKay “Bayesian Online Changepoint Detection”, 2007 arXiv:0710.3742 [stat.ML]
  3. Reda Alami, Odalric-Ambrym Maillard and Raphaël Féraud “Restarted Bayesian Online Change-point Detector achieves Optimal Detection Delay” In Proceedings of the 37th International Conference on Machine Learning, Proceedings of Machine Learning Research, 2020
  4. Tim Bass “Multisensor Data Fusion for Next Generation Distributed Intrusion Detection Systems” In Proceedings of the IRIS National Symposium on Sensor and Data Fusion, 1999, pp. 24–27
  5. Michèle Basseville and Igor V. Nikiforov “Detection of Abrupt Changes: Theory and Application” USA: Prentice-Hall, Inc., 1993
  6. Jie Chen and Arjun K Gupta “Parametric Statistical Change Point Analysis: With Applications to Genetics, Medicine, and Finance; 2nd ed.” Boston: Springer, 2012
  7. Thomas M. Cover and Joy A. Thomas “Elements of Information Theory” John Wiley & Sons, 1999
  8. “Distributed Sensing for Quality and Productivity Improvements” In IEEE Transactions on Automation Science and Engineering, 2006
  9. Erhan Baki Ermis and Venkatesh Saligrama “Distributed Detection in Sensor Networks With Limited Range Multimodal Sensors” In IEEE Transactions on Signal Processing 58.2, 2010, pp. 843–858
  10. Paul Fearnhead and Zhen Liu “On-line inference for multiple changepoint problems” In Journal of the Royal Statistical Society: Series B (Statistical Methodology) 69.4 Wiley, 2007, pp. 589–605
  11. Aurélien Garivier and Emilie Kaufmann “Optimal Best Arm Identification with Fixed Confidence” In Conference On Learning Theory, 2016, pp. 998–1027
  12. Aurélien Garivier and Eric Moulines “On upper-confidence bound policies for switching bandit problems” In International Conference on Algorithmic Learning Theory, 2011, pp. 174–188 Springer
  13. Aurélien Garivier, Pierre Ménard and Gilles Stoltz “Explore first, exploit next: The true shape of regret in bandit problems” In Mathematics of Operations Research 44.2 INFORMS, 2019, pp. 377–399
  14. Gregory W. Gundersen et al. “Active multi-fidelity Bayesian online changepoint detection” In arXiv preprint arXiv:2103.14224, 2021
  15. Alfred O. Hero and Douglas Cochran “Sensor Management: Past, Present, and Future” In IEEE Sensors Journal, 2011
  16. Shogo Hayashi, Yoshinobu Kawahara and Hisashi Kashima “Active Change-Point Detection” In Proceedings of The Eleventh Asian Conference on Machine Learning 101, Proceedings of Machine Learning Research Nagoya, Japan: PMLR, 2019, pp. 1017–1032
  17. Kevin Jamieson, Matthew Malloy, Robert Nowak and Sébastien Bubeck “lil’ UCB: An optimal exploration algorithm for multi-armed bandits” In Conference on Learning Theory, 2014, pp. 423–439 PMLR
  18. Tze Leung Lai “Information bounds and quick detection of parameter changes in stochastic systems” In IEEE Transactions on Information Theory 44.7 IEEE, 1998, pp. 2917–2929
  19. Fang Liu, Joohyun Lee and Ness Shroff “A change-detection based framework for piecewise-stationary multi-armed bandit problem” In Proceedings of the AAAI Conference on Artificial Intelligence 32.1, 2018
  20. Gary Lorden “Procedures for reacting to a change in distribution” In The Annals of Mathematical Statistics 42.6 Institute of Mathematical Statistics, 1971, pp. 1897–1908
  21. Tor Lattimore and Csaba Szepesvári “Bandit Algorithms” Cambridge University Press, 2020
  22. Tze Leung Lai and Haipeng Xing “Sequential Change-Point Detection When the Pre- and Post-Change Parameters are Unknown” In Sequential Analysis 29.2 Taylor & Francis, 2010, pp. 162–175
  23. Odalric-Ambrym Maillard “Sequential change-point detection: Laplace concentration of scan statistics and non-asymptotic delay bounds” In Proceedings of the 30th International Conference on Algorithmic Learning Theory 98, Proceedings of Machine Learning Research Chicago, Illinois: PMLR, 2019, pp. 610–632
  24. Joseph Mellor and Jonathan Shapiro “Thompson sampling in switching environments with Bayesian online change detection” In Artificial Intelligence and Statistics, 2013, pp. 442–450 PMLR
  25. Michael A. Osborne, Roman Garnett and Stephen J. Roberts “Active Data Selection for Sensor Networks with Faults and Changepoints” In 24th IEEE International Conference on Advanced Information Networking and Applications, AINA 2010, Perth, Australia, 20–23 April 2010 IEEE Computer Society, 2010, pp. 533–540
  26. E.S. Page “Continuous Inspection Schemes” In Biometrika 41.1/2 [Oxford University Press, Biometrika Trust], 1954, pp. 100–115
  27. Harsh Purohit et al. “MIMII Dataset: Sound Dataset for Malfunctioning Industrial Machine Investigation and Inspection”, 2019 arXiv:1909.09347
  28. Harsh Purohit et al. “MIMII Dataset: Sound Dataset for Malfunctioning Industrial Machine Investigation and Inspection” Zenodo, 2019 DOI: 10.5281/zenodo.3384388
  29. Jing Qian, Venkatesh Saligrama and Yuting Chen “Connected Sub-graph Detection” In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics 33, Proceedings of Machine Learning Research Reykjavik, Iceland: PMLR, 2014, pp. 796–804
  30. William R Thompson “On the likelihood that one unknown probability exceeds another in view of the evidence of two samples” In Biometrika 25.3/4 JSTOR, 1933, pp. 285–294
  31. Alexander Tartakovsky, Igor Nikiforov and Michèle Basseville “Sequential Analysis: Hypothesis Testing and Changepoint Detection” CRC Press, 2014
  32. Venugopal V Veeravalli and Taposh Banerjee “Quickest change detection” In Academic Press Library in Signal Processing 3 Elsevier, 2014, pp. 209–255
  33. “Non-Contact Vital Sign Monitoring in the Clinic” In 2017 12th IEEE International Conference on Automatic Face Gesture Recognition (FG 2017), 2017
  34. Bimal Viswanath et al. “Towards Detecting Anomalous User Behavior in Online Social Networks” In USENIX Security Symposium, 2014
  35. Shihao Yang, Mauricio Santillana and S.C. Kou “Accurate estimation of influenza epidemics using Google search data via ARGO” In Proceedings of the National Academy of Sciences National Academy of Sciences, 2015
  36. Wanrong Zhang and Yajun Mei “Bandit Change-Point Detection for Real-Time Monitoring High-Dimensional Data Under Sampling Control”, 2020 arXiv:2009.11891 [stat.ME]
Citations (8)
