
Jointly Encoding Word Confusion Network and Dialogue Context with BERT for Spoken Language Understanding (2005.11640v3)

Published 24 May 2020 in cs.CL and cs.LG

Abstract: Spoken Language Understanding (SLU) converts hypotheses from an automatic speech recognizer (ASR) into structured semantic representations. ASR recognition errors can severely degrade the performance of the subsequent SLU module. To address this issue, word confusion networks (WCNs) have been used to encode the input for SLU, as they contain richer information than 1-best or n-best hypothesis lists. To further resolve ambiguity, the last system act of the dialogue context is utilized as additional input. In this paper, a novel BERT-based SLU model (WCN-BERT SLU) is proposed to encode WCNs and the dialogue context jointly. It integrates both the structural information and the ASR posterior probabilities of WCNs into the BERT architecture. Experiments on DSTC2, an SLU benchmark, show that the proposed method is effective and significantly outperforms previous state-of-the-art models.
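
The abstract names two mechanisms worth unpacking: alternative words in the same WCN bin compete for the same time slot, so they can share a BERT position id, and ASR posterior probabilities are injected into the encoder. Below is a minimal sketch of one plausible realization in PyTorch with Hugging Face transformers; the toy WCN, the segment layout, and the choice to inject posteriors by scaling word embeddings are illustrative assumptions, not the authors' released implementation.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# Toy WCN: each bin holds (word, ASR posterior) alternatives for one time slot.
wcn = [
    [("i", 0.9), ("a", 0.1)],
    [("want", 0.6), ("won", 0.4)],
    [("chinese", 0.8), ("cheese", 0.2)],
    [("food", 1.0)],
]
last_system_act = "what kind of food would you like"  # dialogue context

tokens, probs, positions, segments = ["[CLS]"], [1.0], [0], [0]
pos = 1
for bin_ in wcn:
    for word, p in bin_:
        for piece in tokenizer.tokenize(word):
            tokens.append(piece)
            probs.append(p)
            positions.append(pos)   # alternatives in one bin share a position id
            segments.append(0)
    pos += 1                        # advance positions between bins, not within one

tokens.append("[SEP]"); probs.append(1.0); positions.append(pos); segments.append(0)
pos += 1
for piece in tokenizer.tokenize(last_system_act):
    tokens.append(piece); probs.append(1.0); positions.append(pos); segments.append(1)
    pos += 1
tokens.append("[SEP]"); probs.append(1.0); positions.append(pos); segments.append(1)

input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
position_ids = torch.tensor([positions])
token_type_ids = torch.tensor([segments])
posteriors = torch.tensor([probs]).unsqueeze(-1)          # (1, seq_len, 1)

# Inject ASR confidence by scaling the word embeddings, then pass the
# embeddings directly so the shared position ids can be supplied as well.
embeds = model.embeddings.word_embeddings(input_ids) * posteriors
out = model(inputs_embeds=embeds,
            position_ids=position_ids,
            token_type_ids=token_type_ids)
print(out.last_hidden_state.shape)   # (1, seq_len, 768); fed to SLU output heads
```

Scaling embeddings by posteriors is only one way to use the probabilities; the paper integrates them into the BERT architecture itself, so treat this as a structural illustration of the joint WCN-plus-context input rather than the exact model.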

Authors (6)
  1. Chen Liu (206 papers)
  2. Su Zhu (29 papers)
  3. Zijian Zhao (42 papers)
  4. Ruisheng Cao (24 papers)
  5. Lu Chen (244 papers)
  6. Kai Yu (201 papers)
Citations (18)