Enhancing Large Language Model with Self-Controlled Memory Framework (2304.13343v3)

Published 26 Apr 2023 in cs.CL

Abstract: LLMs are constrained by their inability to process lengthy inputs, resulting in the loss of critical historical information. To address this limitation, we propose the Self-Controlled Memory (SCM) framework, which enhances the ability of LLMs to maintain long-term memory and recall relevant information. The SCM framework comprises three key components: an LLM-based agent that serves as the backbone of the framework, a memory stream that stores the agent's memories, and a memory controller that updates memories and determines when and how to utilize them. The proposed SCM can process ultra-long texts without any modification or fine-tuning and can be integrated with any instruction-following LLM in a plug-and-play fashion. Furthermore, we annotate a dataset to evaluate the effectiveness of SCM in handling lengthy inputs. The dataset covers three tasks: long-term dialogues, book summarization, and meeting summarization. Experimental results demonstrate that our method achieves better retrieval recall and generates more informative responses than competitive baselines in long-term dialogues. (https://github.com/wbbeyourself/SCM4LLMs)
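
To make the three-component design concrete, here is a minimal Python sketch of how an agent, a memory stream, and a memory controller might fit together. The class names, the keyword-overlap retrieval score, the cue-word trigger, and the llm() stub are all assumptions made for illustration, not the authors' implementation (which is available at the linked repository).

```python
# Minimal, self-contained sketch of the three SCM components named in the
# abstract: an LLM-based agent, a memory stream, and a memory controller.
# All names and heuristics below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Memory:
    turn_id: int
    text: str


class MemoryStream:
    """Append-only store of past interactions (the paper's memory stream)."""

    def __init__(self):
        self.memories = []

    def add(self, text):
        self.memories.append(Memory(len(self.memories), text))

    def retrieve(self, query, k=3):
        # Hypothetical relevance score: word overlap with the query.
        # An embedding-based similarity would be a natural substitute.
        q = set(query.lower().split())
        ranked = sorted(self.memories,
                        key=lambda m: len(q & set(m.text.lower().split())),
                        reverse=True)
        return ranked[:k]


class MemoryController:
    """Decides when memories are needed and how to splice them in."""

    CUES = ("earlier", "before", "previous", "remember", "we discussed")

    def needs_memory(self, user_input):
        # Placeholder trigger; the paper frames this decision itself as
        # an LLM judgment rather than a keyword match.
        return any(cue in user_input.lower() for cue in self.CUES)

    def build_context(self, stream, user_input):
        if not self.needs_memory(user_input):
            return ""
        hits = stream.retrieve(user_input)
        return "\n".join(f"[memory {m.turn_id}] {m.text}" for m in hits)


def llm(prompt):
    """Stub standing in for any instruction-following LLM (plug-and-play)."""
    return f"(model response to: {prompt[:60]}...)"


class SCMAgent:
    """The backbone agent: composes prompts from memory context and input."""

    def __init__(self):
        self.stream = MemoryStream()
        self.controller = MemoryController()

    def chat(self, user_input):
        context = self.controller.build_context(self.stream, user_input)
        prompt = (context + "\n" if context else "") + f"User: {user_input}"
        reply = llm(prompt)
        # Every turn is written back to the stream (the update step).
        self.stream.add(f"User: {user_input} / Agent: {reply}")
        return reply


if __name__ == "__main__":
    agent = SCMAgent()
    agent.chat("My flight to Osaka leaves on May 3rd.")
    print(agent.chat("Remember my flight we discussed? When does it leave?"))
```

In this toy run, the second turn triggers the controller's cue check, the stream returns the earlier flight turn, and the prompt reaching the model carries that memory as extra context, which is the plug-and-play behavior the abstract describes.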

Authors (9)
  1. Xinnian Liang (20 papers)
  2. Bing Wang (246 papers)
  3. Hui Huang (159 papers)
  4. Shuangzhi Wu (29 papers)
  5. Peihao Wu (8 papers)
  6. Lu Lu (189 papers)
  7. Zejun Ma (78 papers)
  8. Zhoujun Li (122 papers)
  9. Jian Yang (503 papers)
Citations (15)