Stateful Memory-Augmented Transformers for Efficient Dialogue Modeling (2209.07634v2)

Published 15 Sep 2022 in cs.CL

Abstract: Transformer encoder-decoder models have achieved great performance in dialogue generation tasks; however, their inability to process long dialogue history often leads to truncation of the context. To address this problem, we propose a novel memory-augmented transformer that is compatible with existing pre-trained encoder-decoder models and enables efficient preservation of the dialogue history information. By incorporating a separate memory module alongside the pre-trained transformer, the model can effectively interchange information between the memory states and the current input context. We evaluate our model on three dialogue datasets and two language modeling datasets. Experimental results show that our method achieves superior efficiency and performance compared to other pre-trained Transformer baselines.
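The abstract describes exchanging information between a memory module and the current input context. Below is a minimal, hypothetical sketch of one way such a read/write exchange could look in PyTorch; the module name, slot count, and attention layout are illustrative assumptions, not the authors' implementation.

```python
from typing import Optional

import torch
import torch.nn as nn


class MemoryAugmentedLayer(nn.Module):
    """Sketch: a fixed bank of memory slots exchanges information with the
    current context via cross-attention (an assumed design, not the paper's)."""

    def __init__(self, d_model: int = 512, n_heads: int = 8, n_memory: int = 64):
        super().__init__()
        # Learned initial memory slots, expanded per batch at run time.
        self.memory_init = nn.Parameter(torch.randn(n_memory, d_model) * 0.02)
        # Read: context tokens attend over memory; write: memory attends over context.
        self.read_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.write_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_ctx = nn.LayerNorm(d_model)
        self.norm_mem = nn.LayerNorm(d_model)

    def forward(self, context: torch.Tensor, memory: Optional[torch.Tensor] = None):
        # context: (batch, seq_len, d_model); memory carries dialogue-history state.
        if memory is None:
            memory = self.memory_init.unsqueeze(0).expand(context.size(0), -1, -1)
        # Read step: the current context queries the memory states.
        read, _ = self.read_attn(query=context, key=memory, value=memory)
        context = self.norm_ctx(context + read)
        # Write step: the memory states are updated from the current context,
        # so history can persist across turns without re-encoding the full dialogue.
        write, _ = self.write_attn(query=memory, key=context, value=context)
        memory = self.norm_mem(memory + write)
        return context, memory


# Usage sketch: carry `memory` across turns instead of concatenating the history.
layer = MemoryAugmentedLayer()
turn = torch.randn(2, 16, 512)   # toy batch of encoded turn representations
ctx, mem = layer(turn)           # first turn initializes the memory
ctx, mem = layer(turn, mem)      # later turns reuse the compressed history state
```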

Authors (2)
  1. Qingyang Wu (29 papers)
  2. Zhou Yu (206 papers)

