Amortized Planning with Large-Scale Transformers: A Case Study on Chess (2402.04494v2)
Abstract: This paper uses chess, a landmark planning problem in AI, to assess transformers' performance on a planning task where memorization is futile – even at a large scale. To this end, we release ChessBench, a large-scale benchmark dataset of 10 million chess games with legal move and value annotations (15 billion data points) provided by Stockfish 16, the state-of-the-art chess engine. We train transformers with up to 270 million parameters on ChessBench via supervised learning and perform extensive ablations to assess the impact of dataset size, model size, architecture type, and different prediction targets (state-values, action-values, and behavioral cloning). Our largest models learn to predict action-values for novel boards quite accurately, implying highly non-trivial generalization. Despite performing no explicit search, our resulting chess policy solves challenging chess puzzles and achieves a surprisingly strong Lichess blitz Elo of 2895 against humans (grandmaster level). We also compare to Leela Chess Zero and AlphaZero (trained without supervision via self-play) with and without search. We show that, although a remarkably good approximation of Stockfish's search-based algorithm can be distilled into large-scale transformers via supervised learning, perfect distillation is still beyond reach, thus making ChessBench well-suited for future research.
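To make concrete what a searchless, amortized policy looks like, the sketch below scores every legal move with a learned action-value predictor and plays the argmax; no lookahead is performed. This is a minimal illustration only, assuming the python-chess library for legal-move generation and a hypothetical `predict_win_probability` placeholder standing in for the trained transformer; it is not the paper's actual code or API.

```python
# Minimal sketch of a greedy action-value policy (no explicit search).
# Assumption: python-chess provides board handling and legal moves;
# `predict_win_probability` is a hypothetical stand-in for the model.
import chess


def predict_win_probability(fen: str, move_uci: str) -> float:
    """Placeholder for the transformer's action-value prediction.

    A real model would map the (board, move) pair to the win probability
    estimated by Stockfish 16; here we return a dummy constant so the
    sketch runs end to end.
    """
    return 0.5


def select_move(fen: str) -> str:
    """Return the legal move with the highest predicted action-value."""
    board = chess.Board(fen)
    scored = [(predict_win_probability(fen, move.uci()), move.uci())
              for move in board.legal_moves]
    return max(scored, key=lambda pair: pair[0])[1]


if __name__ == "__main__":
    # Pick a move for the starting position.
    print(select_move(chess.Board().fen()))
```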