
ChemTime: Rapid and Early Classification for Multivariate Time Series Classification of Chemical Sensors (2312.09871v1)

Published 15 Dec 2023 in cs.LG and q-bio.QM

Abstract: Multivariate time series data are ubiquitous in the application of machine learning to problems in the physical sciences. Chemiresistive sensor arrays are highly promising in chemical detection tasks relevant to industrial, safety, and military applications. Sensor arrays are an inherently multivariate time series data collection tool that demands rapid and accurate classification of arbitrary chemical analytes. Previous research has benchmarked data-agnostic multivariate time series classifiers across diverse supervised tasks in order to find general-purpose classification algorithms. To our knowledge, there has yet to be an effort to survey machine learning and time series classification approaches to chemiresistive hardware sensor arrays for the detection of chemical analytes. In addition to benchmarking existing multivariate time series classifiers, we incorporate findings from a model survey to propose the novel ChemTime approach to sensor array classification for chemical sensing. We design experiments addressing the unique challenges of hardware sensor array classification, including rapid classification and the minimization of inference time, while maintaining performance for deployed lightweight hardware sensing devices. We find that ChemTime is uniquely positioned for the chemical sensing task, combining rapid and early classification of time series with favorable inference times and high accuracy.
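The experimental question at the heart of the abstract, how accuracy changes as more of the sensor response is observed, can be illustrated with a small sketch. This is not the authors' ChemTime implementation: the synthetic sensor-array data, the prefix-based summary features, and the logistic-regression classifier below are illustrative assumptions used only to show the early-classification evaluation loop (train and score on progressively longer prefixes of each multivariate series).

```python
# Minimal sketch of an early-classification protocol for a multivariate
# sensor-array task. All data and model choices here are assumptions for
# illustration; they are not the ChemTime method from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a chemiresistive sensor array:
# 200 exposures, 8 sensor channels, 300 time steps, 3 analyte classes.
n_samples, n_channels, n_timesteps, n_classes = 200, 8, 300, 3
y = rng.integers(0, n_classes, size=n_samples)

# Each class gets a different per-channel response slope plus noise.
slopes = rng.normal(size=(n_classes, n_channels, 1))
t = np.linspace(0, 1, n_timesteps)
X = slopes[y] * t + 0.5 * rng.normal(size=(n_samples, n_channels, n_timesteps))

def prefix_features(X, length):
    """Per-channel summary statistics of the first `length` time steps."""
    prefix = X[:, :, :length]
    feats = [prefix.mean(axis=2),
             prefix.std(axis=2),
             prefix[:, :, -1] - prefix[:, :, 0]]  # net drift over the prefix
    return np.concatenate(feats, axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Accuracy as a function of observed prefix length: the earliness/accuracy
# trade-off that early time series classification tries to optimize.
for length in (10, 25, 50, 100, 200, 300):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(prefix_features(X_tr, length), y_tr)
    acc = clf.score(prefix_features(X_te, length), y_te)
    print(f"prefix length {length:3d}: accuracy {acc:.3f}")
```

The published work uses its own classifier and representation; the sketch only demonstrates the evaluation protocol, scoring a classifier on increasingly complete observations of each series so that accuracy can be traded off against decision time.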

