Training Neural Networks with Internal State, Unconstrained Connectivity, and Discrete Activations (2312.14359v1)
Abstract: Today's most powerful machine learning approaches are typically designed to train stateless architectures with predefined layers and differentiable activation functions. While these approaches have led to unprecedented successes in areas such as natural language processing and image recognition, the trained models are also susceptible to making mistakes that a human would not. In this paper, we take the view that true intelligence may require the ability of a machine learning model to manage internal state, but that we have not yet discovered the most effective algorithms for training such models. We further postulate that such algorithms might not necessarily be based on gradient descent over a deep architecture, but rather, might work best with an architecture that has discrete activations and few initial topological constraints (such as multiple predefined layers). We present one attempt in our ongoing efforts to design such a training algorithm, applied to an architecture with binary activations and only a single matrix of weights, and show that it is able to form useful representations of natural language text, but is also limited in its ability to leverage large quantities of training data. We then provide ideas for improving the algorithm and for designing other training algorithms for similar architectures. Finally, we discuss potential benefits that could be gained if an effective training algorithm is found, and suggest experiments for evaluating whether these benefits exist in practice.
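As a rough illustration of the kind of architecture the abstract describes — internal state, discrete (binary) activations, and a single weight matrix rather than multiple predefined layers — the following sketch shows one plausible state-update rule. This is not the paper's actual training algorithm; all names, dimensions, and the thresholding scheme here are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 64 binary state units, byte-valued inputs as one-hot vectors.
STATE_DIM, INPUT_DIM = 64, 256

# A single weight matrix maps the concatenated (state, input) vector
# to the next binary state via a hard threshold.
W = rng.normal(size=(STATE_DIM, STATE_DIM + INPUT_DIM))

def step(state: np.ndarray, token: int) -> np.ndarray:
    """One update: concatenate the current state with a one-hot input,
    multiply by the single weight matrix, and apply a discrete 0/1
    threshold activation (so the state stays binary)."""
    x = np.zeros(STATE_DIM + INPUT_DIM)
    x[:STATE_DIM] = state
    x[STATE_DIM + token] = 1.0
    return (W @ x > 0.0).astype(np.float64)

# The internal state evolves as each input symbol arrives, so the final
# state can serve as a representation of the whole sequence.
state = np.zeros(STATE_DIM)
for byte in b"hello":
    state = step(state, byte)
```

Because the threshold is non-differentiable, gradient descent does not apply directly to such a model, which is exactly why the abstract argues that different training algorithms may be needed.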
- Alexander Grushin