
Recursively Summarizing Enables Long-Term Dialogue Memory in Large Language Models (2308.15022v2)

Published 29 Aug 2023 in cs.CL and cs.AI

Abstract: Recently, LLMs such as GPT-4 have exhibited remarkable conversational abilities, enabling them to engage in dynamic and contextually relevant dialogues across a wide range of topics. However, in a long conversation these chatbots fail to recall past information and tend to generate inconsistent responses. To address this, we propose recursively generating summaries/memory with LLMs to enhance their long-term memory ability. Specifically, our method first prompts the LLM to memorize small dialogue contexts and then recursively produces new memory from the previous memory and the following contexts. Finally, the chatbot can easily generate a highly consistent response with the help of the latest memory. We evaluate our method on both open and closed LLMs, and experiments on a widely used public dataset show that it generates more consistent responses in long-context conversations. We also show that our strategy nicely complements both long-context (e.g., 8K and 16K) and retrieval-enhanced LLMs, bringing further gains in long-term dialogue performance. Notably, our method is a potential solution for enabling LLMs to model extremely long contexts. The code and scripts will be released later.

An Overview of the Paper "Recursively Summarizing Enables Long-Term Dialogue Memory in LLMs"

This paper introduces an approach to the challenge LLMs face in maintaining consistency over long dialogues. The authors present a methodology in which dialogue sessions are recursively summarized to enhance the memory capabilities of LLMs, enabling them to handle extended dialogue contexts more effectively.

The crux of the paper lies in the novel recursive summarization technique designed to improve long-term memory in LLMs. The research highlights the inherent limitation of LLMs such as GPT-4, which, despite their advanced conversational abilities, can falter over long interactions by forgetting past context and generating inconsistent responses. The proposed solution leverages summarization to create a dynamic memory system that evolves as the conversation progresses. The approach involves using the LLM itself to generate summaries of small dialogue contexts and then recursively update these summaries with new information as the conversation unfolds.
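The mechanism can be illustrated with a short sketch. The code below is illustrative only: `llm` is a hypothetical stand-in for whatever chat model is used (GPT-4, an open model, etc.), and the prompt wording is an assumption rather than the paper's actual prompts. It shows the recursive fold of each new dialogue session into a running memory, followed by response generation conditioned on the latest memory.

```python
# Minimal sketch of recursive summarization for long-term dialogue memory.
# `llm` is a hypothetical callable wrapping any chat-completion API; the
# prompt text is illustrative, not the paper's exact wording.

from typing import Callable, List

def update_memory(llm: Callable[[str], str], memory: str, session: List[str]) -> str:
    """Recursively fold a new dialogue session into the running memory/summary."""
    prompt = (
        "Previous memory of the conversation:\n"
        f"{memory or '(empty)'}\n\n"
        "New dialogue session:\n" + "\n".join(session) + "\n\n"
        "Update the memory so it stays short but keeps every fact needed "
        "to continue the conversation consistently."
    )
    return llm(prompt)

def respond(llm: Callable[[str], str], memory: str, recent_turns: List[str]) -> str:
    """Generate the next reply conditioned on the latest memory plus recent turns."""
    prompt = (
        f"Memory of earlier conversation:\n{memory}\n\n"
        "Most recent turns:\n" + "\n".join(recent_turns) + "\n\n"
        "Reply to the user consistently with the memory above."
    )
    return llm(prompt)

# Usage: after each session, fold it into memory, then answer using the latest memory.
# memory = ""
# for session in dialogue_sessions:
#     memory = update_memory(llm, memory, session)
# reply = respond(llm, memory, latest_turns)
```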

The methodology is evaluated using both open and closed LLMs, with experiments conducted on a widely recognized public dataset. The results demonstrate that the proposed recursive summarization approach produces more consistent and contextually appropriate responses in long-term conversations. Additionally, the method complements LLMs with long-context capabilities (e.g., 8K and 16K context windows) and retrieval-enhanced models, offering a potential solution for managing extremely long dialogue contexts.
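As a rough illustration of that complementarity, the hedged sketch below shows one way the recursive memory could sit alongside retrieved raw turns and the current context in a single prompt; the `build_prompt` helper and its section headers are assumptions for illustration, not components specified in the paper.

```python
# Assumed prompt assembly combining summary memory with retrieval results.
from typing import List

def build_prompt(memory: str,
                 retrieved_turns: List[str],
                 recent_turns: List[str]) -> str:
    """Concatenate summary memory, retrieved raw turns, and the current context."""
    parts = [
        "Summary memory:\n" + memory,
        "Retrieved past turns:\n" + "\n".join(retrieved_turns),
        "Current dialogue:\n" + "\n".join(recent_turns),
        "Assistant:",
    ]
    return "\n\n".join(parts)
```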

Noteworthy is the simplicity and effectiveness of the proposed schema, which operates as a plug-in, making it easily integrable into existing systems. The paper provides empirical evidence showing that even in the absence of ground truth memories, the system effectively generates coherent and relevant summaries, suggesting its potential as a robust tool for long-context modeling.

The implications of this work extend beyond dialogue systems. The proposed framework offers a path toward enhancing the overall contextual understanding of LLMs across various applications that require sustained context retention and coherence. Future developments could involve exploring the integration of additional memory augmentation methods or adapting the approach to other long-context tasks such as narrative generation or extensive interactive storytelling.

In conclusion, the paper delivers a valuable contribution to the ongoing research on LLMs by providing a practical and efficient method to augment their memory capabilities. The recursive summarization strategy stands out as a promising direction for ongoing and future investigations into improving the contextual breadth and dialogue continuity in LLMs.

Authors (7)
  1. Qingyue Wang (6 papers)
  2. Liang Ding (158 papers)
  3. Yanan Cao (34 papers)
  4. Zhiliang Tian (32 papers)
  5. Shi Wang (47 papers)
  6. Dacheng Tao (826 papers)
  7. Li Guo (184 papers)
Citations (13)