Parameterized Neural Network Language Models for Information Retrieval (1510.01562v1)
Abstract: Information Retrieval (IR) models need to deal with two difficult issues, vocabulary mismatch and term dependencies. Vocabulary mismatch corresponds to the difficulty of retrieving relevant documents that do not contain the exact query terms but semantically related terms. Term dependency refers to the need to consider the relationships between the words of the query when estimating the relevance of a document. A multitude of solutions has been proposed to solve each of these two problems, but no principled model solves both. In parallel, in the last few years, language models based on neural networks have been used to cope with complex natural language processing tasks like emotion and paraphrase detection. Although they have good abilities to cope with both term dependency and vocabulary mismatch problems, thanks to the distributed representation of words they are based upon, such models cannot be used readily in IR, where the estimation of one language model per document (or query) is required. This is both computationally unfeasible and prone to over-fitting. Based on a recent work that proposed to learn a generic language model that can be modified through a set of document-specific parameters, we explore the use of new neural network models that are adapted to ad-hoc IR tasks. Within the language model IR framework, we propose and study the use of a generic language model as well as a document-specific language model. Both can be used as a smoothing component, but the latter is more adapted to the document at hand and has the potential of being used as a full document language model. We experiment with such models and analyze their results on TREC-1 to TREC-8 datasets.
- Benjamin Piwowarski
- Sylvain Lamprier
- Nicolas Despres
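
To make the architecture described in the abstract concrete, below is a minimal sketch, not the authors' implementation: a generic feed-forward neural network language model whose hidden layer is modulated by a small document-specific parameter vector, with the resulting distribution interpolated, Jelinek-Mercer style, with the document's maximum-likelihood unigram model so that the network acts as the smoothing component. All names, layer sizes, the context length, and the interpolation weight `lam` are illustrative assumptions.

```python
# Hypothetical sketch of a parameterized NNLM used as a smoothing
# component in the language model IR framework (not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

V, E, H, D_DIM = 1000, 32, 64, 8   # vocab, embedding, hidden, doc-param sizes (assumed)
CONTEXT = 2                        # n-gram context length (assumed)

# Generic (corpus-level) parameters, shared across all documents.
emb = rng.normal(0, 0.1, (V, E))
W_h = rng.normal(0, 0.1, (CONTEXT * E, H))
W_d = rng.normal(0, 0.1, (D_DIM, H))   # maps doc-specific parameters into the hidden layer
W_o = rng.normal(0, 0.1, (H, V))

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def nnlm_prob(context, d_params):
    """P_nn(w | context, d): generic NNLM modulated by document parameters."""
    x = np.concatenate([emb[t] for t in context])     # concatenated context embeddings
    h = np.tanh(x @ W_h + d_params @ W_d)             # hidden layer shifted per document
    return softmax(h @ W_o)                           # distribution over the vocabulary

def smoothed_prob(word, context, doc_counts, d_params, lam=0.5):
    """Jelinek-Mercer-style interpolation: the NNLM serves as the
    smoothing component for the document's ML unigram estimate."""
    p_ml = doc_counts.get(word, 0) / max(1, sum(doc_counts.values()))
    p_nn = nnlm_prob(context, d_params)[word]
    return lam * p_ml + (1 - lam) * p_nn

# Toy usage: score one query term against one document.
doc_counts = {3: 5, 7: 2, 42: 1}        # term-id -> frequency in the document
d_params = rng.normal(0, 0.1, D_DIM)    # in the paper, learned per document
print(smoothed_prob(7, [3, 42], doc_counts, d_params))
```

The document-specific vector `d_params` stands in for the paper's per-document parameters; how those parameters are estimated, and the exact smoothing scheme, follow the paper rather than this sketch.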