Generalized Convergence Analysis of Tsetlin Machines: A Probabilistic Approach to Concept Learning (2310.02005v1)
Abstract: Tsetlin Machines (TMs) have garnered increasing interest for their ability to learn concepts via propositional formulas and their proven efficiency across various application domains. Despite this, the convergence proof for TMs, particularly for the AND operator (\emph{conjunction} of literals) in the generalized case (inputs of more than two bits), remains an open problem. This paper fills this gap by presenting a comprehensive convergence analysis of Tsetlin automaton-based machine learning algorithms. We introduce a novel framework, referred to as Probabilistic Concept Learning (PCL), which simplifies the TM structure while incorporating dedicated feedback mechanisms and inclusion/exclusion probabilities for literals. Given $n$ features, PCL aims to learn a set of conjunction clauses $C_i$, each associated with a distinct inclusion probability $p_i$. Most importantly, we establish a theoretical proof that, for any clause $C_k$, PCL converges to a conjunction of literals when $0.5<p_k<1$. This result serves as a stepping stone for future research on the convergence properties of Tsetlin automaton-based learning algorithms. Our findings not only contribute to the theoretical understanding of Tsetlin Machines but also have implications for their practical application, potentially leading to more robust and interpretable machine learning models.
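To make the setting concrete, the following is a minimal sketch of a PCL-style learner for a single conjunction clause over $n$ Boolean features. The update rules, function names, and reward scheme below are our own illustrative assumptions, not the paper's exact feedback mechanism: literals are stochastically dropped on false negatives (generalization) and re-included on false positives (specialization), each with an inclusion probability $p$ chosen inside the abstract's convergence range $0.5 < p < 1$.

```python
import random

def learn_conjunction(examples, n, p=0.8, epochs=500, seed=0):
    """Toy probabilistic learner for a conjunction of literals.

    Literals 0..n-1 are x_j; literals n..2n-1 are NOT x_j.
    `included[l]` records whether literal l is currently in the clause.
    Illustrative sketch only -- not the paper's exact PCL feedback.
    """
    rng = random.Random(seed)
    # Start from the most specific clause: every literal included.
    included = [True] * (2 * n)

    def literal_value(x, l):
        return x[l] if l < n else 1 - x[l - n]

    def clause(x):
        return all(literal_value(x, l) for l in range(2 * n) if included[l])

    for _ in range(epochs):
        x, y = rng.choice(examples)
        out = clause(x)
        if y == 1 and not out:
            # False negative: with probability p, drop each included
            # literal the positive example violates (generalize).
            for l in range(2 * n):
                if included[l] and literal_value(x, l) == 0 and rng.random() < p:
                    included[l] = False
        elif y == 0 and out:
            # False positive: with probability p, re-include each literal
            # the negative example violates (specialize).
            for l in range(2 * n):
                if not included[l] and literal_value(x, l) == 0 and rng.random() < p:
                    included[l] = True
    return included
```

For example, trained on the full truth table of $y = x_0 \wedge x_1$ over three bits, the learner settles on the clause $x_0 \wedge x_1$: the literals $x_0$ and $x_1$ are never violated by a positive example and so are never dropped, while the four spurious literals are each removed with probability $p$ whenever a violating positive example is drawn.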