Quantifying Uncertainties in Natural Language Processing Tasks (1811.07253v1)
Published 18 Nov 2018 in cs.CL, cs.AI, cs.LG, and cs.NE
Abstract: Reliable uncertainty quantification is a first step towards building explainable, transparent, and accountable artificially intelligent systems. Recent progress in Bayesian deep learning has made such quantification realizable. In this paper, we propose novel methods to study the benefits of characterizing model and data uncertainties for NLP tasks. With empirical experiments on sentiment analysis, named entity recognition, and language modeling using convolutional and recurrent neural network models, we show that explicitly modeling uncertainties is not only necessary for measuring output confidence levels, but also useful for enhancing model performance on various NLP tasks.
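The abstract gives no implementation details, so the following is only a minimal illustrative sketch of the general Bayesian deep learning recipe it alludes to: Monte Carlo dropout to estimate model (epistemic) uncertainty and a learned per-example variance to capture data (aleatoric) uncertainty. All class names, layer sizes, and hyperparameters here are assumptions for illustration, not the paper's actual architecture or code.

```python
# Hypothetical sketch (not the paper's code): MC dropout for model (epistemic)
# uncertainty plus a learned variance head for data (aleatoric) uncertainty.
import torch
import torch.nn as nn

class UncertainRegressor(nn.Module):
    """Toy model that predicts a mean and a log-variance for each input."""
    def __init__(self, in_dim=16, hidden=64, p_drop=0.25):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)  # data (aleatoric) uncertainty

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def heteroscedastic_loss(mean, logvar, target):
    # Negative Gaussian log-likelihood with a learned per-example variance.
    return (0.5 * torch.exp(-logvar) * (target - mean) ** 2 + 0.5 * logvar).mean()

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=30):
    """Keep dropout active at test time and average over repeated samples."""
    model.train()  # enables dropout; freeze batch-norm layers if the model has any
    means, variances = [], []
    for _ in range(n_samples):
        mean, logvar = model(x)
        means.append(mean)
        variances.append(torch.exp(logvar))
    means = torch.stack(means)                      # (n_samples, batch, 1)
    model_uncertainty = means.var(dim=0)            # spread across MC samples
    data_uncertainty = torch.stack(variances).mean(dim=0)  # avg predicted variance
    return means.mean(dim=0), model_uncertainty, data_uncertainty

if __name__ == "__main__":
    torch.manual_seed(0)
    model = UncertainRegressor()
    x = torch.randn(8, 16)
    pred, epistemic, aleatoric = mc_dropout_predict(model, x)
    print(pred.shape, epistemic.shape, aleatoric.shape)
```

In this style of approach, the two uncertainty estimates serve different roles: the MC-sample variance reflects what the model does not know (and shrinks with more data), while the predicted variance captures inherent noise in the inputs or labels; the paper's contribution is studying how modeling both affects NLP tasks such as sentiment analysis, NER, and language modeling.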