
Joint Online Spoken Language Understanding and Language Modeling with Recurrent Neural Networks (1609.01462v1)

Published 6 Sep 2016 in cs.CL

Abstract: Speaker intent detection and semantic slot filling are two critical tasks in spoken language understanding (SLU) for dialogue systems. In this paper, we describe a recurrent neural network (RNN) model that jointly performs intent detection, slot filling, and language modeling. The neural network model keeps updating the intent estimation as each word in the transcribed utterance arrives and uses it as contextual features in the joint model. Evaluation of the language model and the online SLU model is made on the ATIS benchmarking data set. On the language modeling task, our joint model achieves an 11.8% relative reduction in perplexity compared to the independently trained language model. On SLU tasks, our joint model outperforms the independent task training model by 22.3% on intent detection error rate, with slight degradation in slot filling F1 score. The joint model also shows advantageous performance in realistic ASR settings with noisy speech input.
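
The abstract is the only description of the architecture available here, so the following is a minimal PyTorch sketch of a joint model of this kind, not the authors' implementation: the LSTM cell, the layer sizes, and the choice of feeding the softmax intent posterior back as the per-step contextual feature are all assumptions made for illustration.

```python
import torch
import torch.nn as nn


class JointSLULM(nn.Module):
    """Illustrative joint online SLU + language modeling RNN."""

    def __init__(self, vocab_size, num_intents, num_slots,
                 embed_dim=128, hidden_dim=256):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.num_intents = num_intents
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Recurrent core; an LSTM cell and these sizes are assumptions,
        # the paper's exact recurrent unit may differ.
        self.rnn = nn.LSTMCell(embed_dim + num_intents, hidden_dim)
        self.intent_head = nn.Linear(hidden_dim, num_intents)  # intent detection
        self.slot_head = nn.Linear(hidden_dim, num_slots)      # slot filling
        self.lm_head = nn.Linear(hidden_dim, vocab_size)       # next-word LM

    def forward(self, words):
        # words: LongTensor of shape (seq_len,), one transcribed utterance
        h = words.new_zeros(1, self.hidden_dim, dtype=torch.float)
        c = torch.zeros_like(h)
        # Running intent estimate, fed back as a contextual feature each step.
        intent_ctx = words.new_zeros(1, self.num_intents, dtype=torch.float)
        intent_out, slot_out, lm_out = [], [], []
        for w in words:
            x = self.embed(w.view(1))                          # (1, embed_dim)
            h, c = self.rnn(torch.cat([x, intent_ctx], dim=-1), (h, c))
            intent_logits = self.intent_head(h)
            intent_ctx = torch.softmax(intent_logits, dim=-1)  # online update
            intent_out.append(intent_logits)
            slot_out.append(self.slot_head(h))
            lm_out.append(self.lm_head(h))
        return (torch.stack(intent_out), torch.stack(slot_out),
                torch.stack(lm_out))
```

In a joint training setup of this shape, one would typically sum cross-entropy losses over the three heads: the intent prediction (per step or at the final step), the slot tag at each token, and the next word at each token, so that a single backward pass updates the shared recurrent state.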

Authors (2)
  1. Bing Liu (212 papers)
  2. Ian Lane (29 papers)
Citations (103)
