Turn-Level Active Learning for Dialogue State Tracking (2310.14513v1)
Published 23 Oct 2023 in cs.CL
Abstract: Dialogue state tracking (DST) plays an important role in task-oriented dialogue systems. However, collecting a large amount of turn-by-turn annotated dialogue data is costly and inefficient. In this paper, we propose a novel turn-level active learning framework for DST that actively selects which dialogue turns to annotate. Under a limited labelling budget, experimental results demonstrate the effectiveness of selectively annotating dialogue turns. Additionally, our approach achieves DST performance comparable to traditional training approaches with significantly less annotated data, providing a more efficient way to annotate new dialogue data.
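The abstract does not specify the turn-selection criterion, so the following is only a minimal sketch of a generic turn-level active-learning loop under a fixed annotation budget. The uncertainty scorer (entropy over an assumed slot-value distribution), the per-round budget, and all names below are illustrative assumptions, not the authors' method.

```python
# Minimal sketch: turn-level active learning for DST annotation.
# Selection criterion (predictive entropy) and all names are assumptions,
# not the method described in the paper.
import math
import random


def turn_uncertainty(slot_value_probs):
    """Entropy of a turn's (assumed) predicted slot-value distribution."""
    return -sum(p * math.log(p + 1e-12) for p in slot_value_probs)


def select_turns(unlabeled_turns, score_fn, budget):
    """Pick the `budget` most uncertain turns to send for annotation."""
    ranked = sorted(unlabeled_turns, key=lambda t: score_fn(t["probs"]), reverse=True)
    return ranked[:budget]


if __name__ == "__main__":
    random.seed(0)
    # Dummy unlabeled pool: each turn carries a fake predictive distribution.
    pool = [{"id": i, "probs": [random.random() for _ in range(5)]} for i in range(100)]
    for turn in pool:  # normalize to proper probability distributions
        total = sum(turn["probs"])
        turn["probs"] = [p / total for p in turn["probs"]]

    labeled = []
    budget_per_round = 10
    for _ in range(3):  # three annotation rounds
        picked = select_turns(pool, turn_uncertainty, budget_per_round)
        labeled.extend(picked)
        pool = [t for t in pool if t not in picked]
        # In a real pipeline, the DST model would be retrained on `labeled`
        # here and the remaining pool re-scored with the updated model.
    print(f"annotated {len(labeled)} turns, {len(pool)} remain unlabeled")
```

In a full system, the dummy distributions would come from the DST model's predictions on unlabeled turns, and the retraining step inside the loop is what lets the selection adapt as more turns are annotated.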
- Zihan Zhang
- Meng Fang
- Fanghua Ye
- Ling Chen
- Mohammad-Reza Namazi-Rad