Think before you speak: Training Language Models With Pause Tokens (2310.02226v3)

Published 3 Oct 2023 in cs.CL, cs.AI, and cs.LG

Abstract: LLMs generate responses by producing a series of tokens in immediate succession: the $(K+1)^{\text{th}}$ token is an outcome of manipulating $K$ hidden vectors per layer, one vector per preceding token. What if instead we were to let the model manipulate say, $K+10$ hidden vectors, before it outputs the $(K+1)^{\text{th}}$ token? We operationalize this idea by performing training and inference on LLMs with a (learnable) $\textit{pause}$ token, a sequence of which is appended to the input prefix. We then delay extracting the model's outputs until the last pause token is seen, thereby allowing the model to process extra computation before committing to an answer. We empirically evaluate $\textit{pause-training}$ on decoder-only models of 1B and 130M parameters with causal pretraining on C4, and on downstream tasks covering reasoning, question-answering, general understanding and fact recall. Our main finding is that inference-time delays show gains when the model is both pre-trained and finetuned with delays. For the 1B model, we witness gains on 8 of 9 tasks, most prominently, a gain of $18\%$ EM score on the QA task of SQuAD, $8\%$ on CommonSenseQA and $1\%$ accuracy on the reasoning task of GSM8k. Our work raises a range of conceptual and practical future research questions on making delayed next-token prediction a widely applicable new paradigm.


Summary

  • The paper demonstrates that incorporating pause tokens during both pretraining and finetuning yields significant performance gains, such as an 18% exact-match improvement on the SQuAD QA task.
  • The paper shows that delaying token generation allows Transformer models to perform extra computation before committing to an answer, effectively augmenting their memory and attention mechanisms without adding significant parameters.
  • The paper highlights that tuning the number of pause tokens per task is crucial for achieving optimal performance in various downstream applications.

An Expert Analysis: Think Before You Speak - Training LLMs with Pause Tokens

The paper "Think before you speak: Training LLMs With Pause Tokens" introduces an innovative approach to training Transformer-based LLMs by incorporating pause tokens during training and inference. This technique diverges from the norm of generating tokens in immediate succession and instead suggests delaying this process to enhance the model's computational performance.

Overview

The central hypothesis of the paper is that the conventional method of producing tokens in immediate succession may constrain the amount of computation a Transformer can perform before emitting each token. By appending learnable pause tokens to the input sequence and ignoring the model's outputs until the last pause token is seen, the model can perform extra computation before outputting the next token. The idea is evaluated empirically on decoder-only models of 1B and 130M parameters, pretrained on the C4 dataset and finetuned on a range of downstream tasks.
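To make the mechanism concrete, here is a minimal inference-time sketch. It assumes a Hugging Face-style causal LM interface and a learnable "<pause>" token already added to the tokenizer and embedding table during pause-training; the function name and token string are illustrative, not taken from the authors' code.

```python
import torch

def generate_with_pauses(model, tokenizer, prompt, num_pauses=10, max_new_tokens=64):
    """Append num_pauses copies of the pause token to the prompt and only start
    reading the model's outputs after the last pause position (illustrative sketch)."""
    pause_id = tokenizer.convert_tokens_to_ids("<pause>")
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    pauses = torch.full((1, num_pauses), pause_id, dtype=torch.long)
    input_ids = torch.cat([input_ids, pauses], dim=-1)

    generated = []
    for _ in range(max_new_tokens):
        with torch.no_grad():
            logits = model(input_ids).logits[:, -1, :]  # prediction taken after the last pause
        next_id = logits.argmax(dim=-1, keepdim=True)   # greedy decoding for simplicity
        if next_id.item() == tokenizer.eos_token_id:
            break
        generated.append(next_id.item())
        input_ids = torch.cat([input_ids, next_id], dim=-1)

    return tokenizer.decode(generated)
```

The outputs at the pause positions themselves are never decoded; the pauses only buy the model extra forward computation before the first answer token is committed.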

Key Findings

  1. Inference-Time Gains: The paper finds substantial improvements in performance when both pre-training and finetuning processes involve pause tokens. For instance, the 1B-parameter model shows an 18% increase in exact match (EM) score on the SQuAD QA task, an 8% increase on CommonSenseQA, and a 1% increase in accuracy on GSM8k.
  2. Impact of Finetuning Alone: Introducing pause tokens only during finetuning yielded mixed results, showing benefits in fewer instances and, at times, even degrading performance.
  3. Pretraining Alone Is Not Sufficient: Pause-pretrained models finetuned without pause tokens did not consistently offer improvements, indicating that delays are needed during both stages for meaningful gains.
  4. Optimal Number of Pauses: Each downstream task appears to have an optimal number of pause tokens, suggesting a nuanced dependency of performance gains on the specific configuration of pauses.

Theoretical Insights

Incorporating pause tokens alters the computational pathway through the Transformer layers: the pause tokens widen the per-layer computation, letting the model execute additional parallel operations before it outputs the next token. This hypothesis is supported theoretically:

Main Theoretical Result

In a Transformer, the self-attention mechanism within each layer has a representational capacity bounded by its parameter count rather than by the input length, but the number of operations it actually executes per layer scales with the number of input tokens. Appending pause tokens lets the model exploit more of this representational capacity by supplying additional positions to compute over. Tasks that require more parallel operations than there are input tokens can therefore be modeled better with pause tokens.
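The intuition can be stated informally in the notation of the abstract; this is a paraphrase, not the paper's exact theorem statement:

```latex
% With a K-token prefix, each layer manipulates K hidden vectors, so the number
% of parallel operations executed per layer scales with K. Appending M pause
% tokens raises this to K + M without adding parameters; the next token is only
% extracted after the last pause position.
\[
  \underbrace{x_1, \dots, x_K}_{\text{input prefix}},\;
  \underbrace{\langle\mathrm{pause}\rangle, \dots, \langle\mathrm{pause}\rangle}_{M \text{ pause tokens}}
  \;\longrightarrow\;
  \hat{x}_{K+1} \ \text{read off at position } K + M .
\]
```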

Practical Implications

  1. Adaptive Compute: The technique provides a pathway for adaptive compute, wherein inference-time computation can be adjusted to the system's constraints, making it practical for deployment under varying computational budgets (a minimal usage sketch follows this list).
  2. Memory and Attention: The method offers a potential framework for enhancing memory and attention mechanisms in Transformers by introducing additional computational steps without adding significant parameters.
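As an illustration of the adaptive-compute point, the following sketch reuses the hypothetical generate_with_pauses helper from the Overview and simply varies the number of appended pause tokens with the available latency budget; the budget tiers and pause counts are invented for illustration.

```python
# Illustrative only: the tiers and pause counts are made-up values, and
# generate_with_pauses is the hypothetical helper sketched earlier.
BUDGET_TO_PAUSES = {"low": 0, "medium": 10, "high": 50}

def answer(model, tokenizer, prompt, budget="medium"):
    # More pause tokens -> more inference-time computation before the answer.
    return generate_with_pauses(
        model, tokenizer, prompt, num_pauses=BUDGET_TO_PAUSES[budget]
    )
```

As the Future Directions section notes, current pause-trained models are not yet robust to changing the number of pauses at inference time, so this kind of adaptivity is more aspirational than ready to deploy.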

Future Directions

Several promising avenues for future research are identified:

  • Improved Robustness: Designing models that are robust to zero-delay scenarios and varying numbers of pause tokens during inference is crucial.
  • Applicability Across Models: Extending this approach to more diverse architectures, such as encoder-decoder models and larger parameter models, would test the generalizability of the findings.
  • Theoretical Exploration: Further theoretical work is needed to formalize the distinction between a Transformer's raw capacity and the capacity it actually implements on a given input, particularly where more complex computational patterns are required.

Conclusion

The introduction of pause tokens in training Transformer-based LLMs represents a notable shift in enhancing computational capabilities. By strategically delaying token generation, the model can leverage additional parallel computational steps, thereby improving performance on various downstream tasks. This paper opens up new paradigms for future explorations in the structure and computational pathways of LLMs.
