Speaker-Sensitive Dual Memory Networks for Multi-Turn Slot Tagging (1711.10705v1)

Published 29 Nov 2017 in cs.CL

Abstract: In multi-turn dialogs, natural language understanding models can make obvious errors when they are blind to contextual information. To incorporate dialog history, we present a neural architecture with Speaker-Sensitive Dual Memory Networks, which encode utterances differently depending on the speaker. This reflects the different amounts of information available to the system: it knows only the surface form of user utterances, while it has the exact semantics of its own output. We performed experiments on real user data from Microsoft Cortana, a commercial personal assistant. The results showed a significant performance improvement over state-of-the-art slot tagging models that use contextual information.
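
The abstract describes two memories built with speaker-specific encoders: user turns are stored from their surface form only, while system turns are stored from their known semantics. The minimal PyTorch sketch below is not the authors' implementation; the encoders, pooling, dot-product attention, layer sizes, and all names (e.g. `SpeakerSensitiveDualMemoryTagger`) are illustrative assumptions meant only to show how two speaker-sensitive memories can condition per-token slot tagging of the current utterance.

```python
# Minimal sketch (not the authors' code) of a speaker-sensitive dual memory
# tagger. Assumptions: previous user turns are available only as word
# sequences, previous system turns as bags of semantic (act/slot) symbols,
# and the model predicts one slot label per token of the current utterance.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpeakerSensitiveDualMemoryTagger(nn.Module):
    def __init__(self, vocab_size, sem_vocab_size, num_labels, dim=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.sem_emb = nn.Embedding(sem_vocab_size, dim, padding_idx=0)
        # Separate treatment per speaker: user turns go through a BiLSTM over
        # words (surface form), system turns are pooled semantic embeddings.
        self.user_enc = nn.LSTM(dim, dim // 2, bidirectional=True, batch_first=True)
        self.cur_enc = nn.LSTM(dim, dim // 2, bidirectional=True, batch_first=True)
        self.tagger = nn.Linear(dim * 3, num_labels)  # token + user ctx + system ctx

    def _attend(self, query, memory):
        # query: (B, D), memory: (B, T, D) -> attended context: (B, D)
        scores = torch.bmm(memory, query.unsqueeze(2)).squeeze(2)   # (B, T)
        weights = F.softmax(scores, dim=1)
        return torch.bmm(weights.unsqueeze(1), memory).squeeze(1)

    def forward(self, cur_words, user_hist, sys_hist):
        # cur_words: (B, L)        word ids of the current user utterance
        # user_hist: (B, H_u, L_u) word ids of previous user turns
        # sys_hist:  (B, H_s, S)   semantic symbol ids of previous system turns
        B, L = cur_words.shape
        tok, _ = self.cur_enc(self.word_emb(cur_words))             # (B, L, D)
        query = tok.mean(dim=1)                                     # utterance summary

        # User memory: encode each history turn from its surface form.
        Bu, Hu, Lu = user_hist.shape
        u_enc, _ = self.user_enc(self.word_emb(user_hist.view(Bu * Hu, Lu)))
        user_mem = u_enc.mean(dim=1).view(Bu, Hu, -1)               # (B, H_u, D)

        # System memory: exact semantics are known, so pool symbol embeddings.
        sys_mem = self.sem_emb(sys_hist).mean(dim=2)                # (B, H_s, D)

        user_ctx = self._attend(query, user_mem)                    # (B, D)
        sys_ctx = self._attend(query, sys_mem)                      # (B, D)
        ctx = torch.cat([user_ctx, sys_ctx], dim=1)                 # (B, 2D)
        ctx = ctx.unsqueeze(1).expand(-1, L, -1)                    # broadcast per token
        return self.tagger(torch.cat([tok, ctx], dim=2))            # (B, L, num_labels)
```

For example, `SpeakerSensitiveDualMemoryTagger(vocab_size=5000, sem_vocab_size=200, num_labels=20)` applied to a batch of current utterances plus padded user and system histories returns per-token logits that can be trained with a standard cross-entropy tagging loss. The key design point carried over from the abstract is the asymmetry: the user memory sees only word sequences, while the system memory is built directly from known output semantics.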

Authors (3)
  1. Young-Bum Kim (22 papers)
  2. Sungjin Lee (46 papers)
  3. Ruhi Sarikaya (16 papers)
Citations (9)