Joint Online Spoken Language Understanding and Language Modeling with Recurrent Neural Networks (1609.01462v1)
Abstract: Speaker intent detection and semantic slot filling are two critical tasks in spoken language understanding (SLU) for dialogue systems. In this paper, we describe a recurrent neural network (RNN) model that jointly performs intent detection, slot filling, and language modeling. The neural network model keeps updating the intent estimation as each word in the transcribed utterance arrives and uses it as contextual features in the joint model. We evaluate the language model and the online SLU model on the ATIS benchmark data set. On the language modeling task, our joint model achieves an 11.8% relative reduction in perplexity compared to the independently trained language model. On the SLU tasks, our joint model outperforms the independent-task training model by 22.3% on intent detection error rate, with only slight degradation in slot filling F1 score. The joint model also shows advantageous performance in realistic ASR settings with noisy speech input.
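To make the joint architecture concrete, here is a minimal PyTorch sketch of an RNN that, at each step, refines a running intent estimate and feeds it back as a contextual feature alongside slot-tagging and next-word prediction heads. All names, layer choices (e.g., LSTMCell), sizes, and the uniform initial intent prior are illustrative assumptions, not the paper's exact specification.

```python
import torch
import torch.nn as nn

class JointSLULM(nn.Module):
    """Sketch of a joint online SLU + language model (hypothetical)."""

    def __init__(self, vocab_size, n_intents, n_slots,
                 emb_dim=100, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # RNN input = current word embedding + previous intent distribution
        self.rnn = nn.LSTMCell(emb_dim + n_intents, hid_dim)
        self.intent_head = nn.Linear(hid_dim, n_intents)  # intent detection
        self.slot_head = nn.Linear(hid_dim, n_slots)      # slot filling
        self.lm_head = nn.Linear(hid_dim, vocab_size)     # next-word LM
        self.n_intents = n_intents
        self.hid_dim = hid_dim

    def forward(self, word_ids):
        # word_ids: (batch, seq_len) token indices of the transcribed words
        batch, seq_len = word_ids.shape
        h = word_ids.new_zeros(batch, self.hid_dim, dtype=torch.float)
        c = torch.zeros_like(h)
        # Start from a uniform intent estimate (an assumption of this sketch)
        intent = torch.full((batch, self.n_intents), 1.0 / self.n_intents)
        intents, slot_logits, lm_logits = [], [], []
        for t in range(seq_len):
            # Feed the running intent estimate back in as a contextual feature
            x = torch.cat([self.embed(word_ids[:, t]), intent], dim=-1)
            h, c = self.rnn(x, (h, c))
            intent = torch.softmax(self.intent_head(h), dim=-1)  # updated online
            intents.append(intent)
            slot_logits.append(self.slot_head(h))
            lm_logits.append(self.lm_head(h))
        return (torch.stack(intents, dim=1),
                torch.stack(slot_logits, dim=1),
                torch.stack(lm_logits, dim=1))
```

Because the intent estimate is recomputed after every word, such a model can run online during decoding, which is the property the abstract highlights for noisy ASR settings.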