Extending Context Window of Large Language Models via Semantic Compression (2312.09571v1)

Published 15 Dec 2023 in cs.CL, cs.IT, and math.IT

Abstract: Transformer-based LLMs often impose limitations on the length of the text input to ensure the generation of fluent and relevant responses. This constraint restricts their applicability in scenarios involving long texts. We propose a novel semantic compression method that enables generalization to texts that are 6-8 times longer, without incurring significant computational costs or requiring fine-tuning. Our proposed framework draws inspiration from source coding in information theory and employs a pre-trained model to reduce the semantic redundancy of long inputs before passing them to the LLMs for downstream tasks. Experimental results demonstrate that our method effectively extends the context window of LLMs across a range of tasks including question answering, summarization, few-shot learning, and information retrieval. Furthermore, the proposed semantic compression method exhibits consistent fluency in text generation while reducing the associated computational overhead.
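
As a rough illustration of the idea described in the abstract, the following is a minimal sketch of a chunk-then-summarize compression step built on the Hugging Face transformers summarization pipeline; the chunk size, the choice of summarization model, and the summary lengths are illustrative assumptions, not the paper's actual method or configuration.

```python
# Minimal sketch (not the paper's implementation): compress a long input by
# summarizing fixed-size chunks with a pre-trained summarizer, then hand the
# concatenated summaries to the downstream LLM as a shorter context.
# Chunk size, the summarization model, and summary lengths are illustrative
# assumptions.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def semantic_compress(text: str, chunk_chars: int = 3000,
                      max_summary_tokens: int = 128) -> str:
    """Reduce semantic redundancy by summarizing each chunk of a long text."""
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    summaries = [
        summarizer(chunk, max_length=max_summary_tokens, min_length=32,
                   do_sample=False)[0]["summary_text"]
        for chunk in chunks
    ]
    # The joined summaries form the compressed context that is passed to the
    # LLM together with the downstream task prompt.
    return "\n".join(summaries)

# Example usage: build a prompt whose context fits the LLM's window.
# long_text = open("long_document.txt").read()   # hypothetical input file
# prompt = f"Context:\n{semantic_compress(long_text)}\n\nQuestion: ..."
```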

Authors (7)
  1. Weizhi Fei (8 papers)
  2. Xueyan Niu (15 papers)
  3. Pingyi Zhou (9 papers)
  4. Lu Hou (50 papers)
  5. Bo Bai (71 papers)
  6. Lei Deng (81 papers)
  7. Wei Han (202 papers)
Citations (17)