"My agent understands me better": Integrating Dynamic Human-like Memory Recall and Consolidation in LLM-Based Agents (2404.00573v1)

Published 31 Mar 2024 in cs.HC

Abstract: In this study, we propose a novel human-like memory architecture designed to enhance the cognitive abilities of LLM-based dialogue agents. The proposed architecture enables agents to autonomously recall the memories needed for response generation, addressing a limitation in the temporal cognition of LLMs. We adopt cue-based recall, modeled on human memory, as the trigger for accurate and efficient memory retrieval. Moreover, we develop a mathematical model that dynamically quantifies memory consolidation, considering factors such as contextual relevance, elapsed time, and recall frequency. The agent stores memories retrieved from the user's interaction history in a database that encapsulates each memory's content and temporal context. This strategic storage allows agents to recall specific memories and understand their significance to the user in a temporal context, similar to how humans recognize and recall past experiences.
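
The abstract names the ingredients of the consolidation model (contextual relevance, elapsed time, recall frequency) without giving its exact form. The sketch below is a minimal illustration of how such a scoring function and cue-triggered recall loop could fit together; it is not the authors' implementation. The `MemoryRecord` type, the multiplicative score, the exponential time decay, and the `decay`/`recall_weight` parameters are all assumptions made for illustration.

```python
import math
import time
from dataclasses import dataclass

@dataclass
class MemoryRecord:
    """One stored memory: content plus the temporal context the paper describes."""
    content: str
    embedding: list[float]   # vector representation of the content
    created_at: float        # when the memory was stored (epoch seconds)
    recall_count: int = 0    # how often this memory has been recalled

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def consolidation_score(mem: MemoryRecord, cue_embedding: list[float],
                        now: float, decay: float = 1e-6,
                        recall_weight: float = 0.1) -> float:
    """Hypothetical score combining the three factors named in the abstract."""
    relevance = cosine(mem.embedding, cue_embedding)        # contextual relevance
    recency = math.exp(-decay * (now - mem.created_at))     # elapsed-time decay
    reinforcement = 1.0 + recall_weight * mem.recall_count  # recall frequency
    return relevance * recency * reinforcement

def recall(memories: list[MemoryRecord], cue_embedding: list[float],
           top_k: int = 3) -> list[MemoryRecord]:
    """Cue-triggered recall: rank stored memories against the current cue
    and reinforce the ones that are retrieved."""
    now = time.time()
    ranked = sorted(memories,
                    key=lambda m: consolidation_score(m, cue_embedding, now),
                    reverse=True)
    for m in ranked[:top_k]:
        m.recall_count += 1  # recalling a memory strengthens its consolidation
    return ranked[:top_k]
```

In a deployed agent, the in-memory list would be replaced by the database the abstract describes, with each record carrying its content and temporal context; the plain list here keeps the sketch self-contained.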

Authors (3)
  1. Yuki Hou
  2. Haruki Tamoto
  3. Homei Miyashita
Citations (9)