Markovian Transformers for Informative Language Modeling (2404.18988v4)
Abstract: Chain-of-Thought (CoT) reasoning holds great promise for explaining LLM outputs, but recent studies have highlighted significant challenges in its practical application for interpretability. We propose to address this issue by making CoT causally essential to prediction through two key components: factoring next-token prediction through intermediate CoT text, and training the CoT to predict future tokens independently of other context. The result is a "Markovian" LLM, in which the CoT serves as a fixed-size state for future token prediction. Our approach optimizes for "informativeness": the improvement in next-token prediction when conditioning on a trained CoT rather than a baseline. Using Proximal Policy Optimization (PPO) for arithmetic problems and policy gradient for GSM8K, we demonstrate effectiveness with Mistral 7B on arithmetic and with Llama 3.1 8B on GSM8K, where the model learns to produce CoTs that are 33.20% more effective at predicting answers than the pre-trained baseline. The increased sensitivity of model performance to CoT perturbations provides strong evidence of CoT reliance. Furthermore, CoTs trained for one model generalize to help other models predict answers, suggesting that these CoTs capture reasoning patterns that transfer across different interpreters. This work advances the development of more interpretable LLMs, potentially enabling their extension to arbitrarily long contexts and enhancing AI reasoning capabilities across various domains.
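The "informativeness" objective described above can be sketched as a simple log-probability difference: the reward for a generated CoT is how much it improves the log-probability of the answer tokens relative to a baseline context. This is a minimal illustration of that scoring rule, not the paper's implementation; the function name and the toy probabilities are assumptions for the example.

```python
import math

def informativeness(logprob_with_cot: float, logprob_baseline: float) -> float:
    """Informativeness reward: improvement in the answer's total
    log-probability when conditioning on the trained CoT versus
    conditioning on a baseline context only."""
    return logprob_with_cot - logprob_baseline

# Toy example (hypothetical numbers): suppose the CoT raises the
# answer's probability under the predictor from 0.2 to 0.6.
reward = informativeness(math.log(0.6), math.log(0.2))
print(round(reward, 4))  # log(0.6 / 0.2) = log 3 ≈ 1.0986
```

In an RL loop such as PPO or vanilla policy gradient, this scalar would serve as the reward signal for the CoT-generating policy, so the policy is optimized purely for how much its CoT helps a downstream predictor.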
- Scott Viteri
- Max Lamparth
- Peter Chatain
- Clark Barrett