
Methods to integrate a language model with semantic information for a word prediction component (0801.4716v1)

Published 30 Jan 2008 in cs.CL

Abstract: Most current word prediction systems make use of n-gram language models (LMs) to estimate the probability of the following word in a phrase. In recent years there have been many attempts to enrich such language models with further syntactic or semantic information. We want to explore the predictive powers of Latent Semantic Analysis (LSA), a method that has been shown to provide reliable information on long-distance semantic dependencies between words in a context. We present and evaluate here several methods that integrate LSA-based information with a standard language model: a semantic cache, partial reranking, and different forms of interpolation. We found that all methods show significant improvements, compared to the 4-gram baseline, and most of them to a simple cache model as well.
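Of the three integration strategies the abstract names, interpolation is the simplest to illustrate: the n-gram probability of each candidate word is mixed with a pseudo-probability derived from the LSA cosine similarity between that word and the current context. The Python sketch below is illustrative only, not the authors' implementation; the names `lsa_probability`, `interpolate`, and `lam`, the non-negativity clipping, and the sum-to-one normalization are all assumptions (the paper evaluates several variants of this idea).

```python
import numpy as np

def lsa_probability(word_vecs, context_words, vocab):
    """Turn LSA cosine similarity to the context centroid into a
    pseudo-probability over the vocabulary (one possible scheme)."""
    vecs = [word_vecs[w] for w in context_words if w in word_vecs]
    if not vecs:
        # No known context words: fall back to a uniform distribution.
        return {w: 1.0 / len(vocab) for w in vocab}
    context = np.mean(vecs, axis=0)
    sims = {}
    for w in vocab:
        v = word_vecs.get(w)
        if v is None:
            sims[w] = 0.0
            continue
        cos = float(np.dot(v, context) /
                    (np.linalg.norm(v) * np.linalg.norm(context) + 1e-12))
        sims[w] = max(cos, 0.0)  # clip negative similarities to zero
    total = sum(sims.values()) or 1.0
    return {w: s / total for w, s in sims.items()}

def interpolate(p_ngram, p_lsa, lam=0.9):
    """Linear interpolation of the two components:
    P(w|h) = lam * P_ngram(w|h) + (1 - lam) * P_lsa(w|context)."""
    return {w: lam * p_ngram.get(w, 0.0) + (1 - lam) * p_lsa.get(w, 0.0)
            for w in set(p_ngram) | set(p_lsa)}

# Example with a toy vocabulary and 2-D stand-in "LSA" vectors.
vecs = {"bank": np.array([0.9, 0.1]), "money": np.array([0.8, 0.2]),
        "river": np.array([0.1, 0.9])}
p_lm = {"bank": 0.5, "money": 0.3, "river": 0.2}      # stand-in 4-gram probs
p_sem = lsa_probability(vecs, ["money"], list(vecs))  # semantic component
print(interpolate(p_lm, p_sem, lam=0.7))
```

In practice the weight `lam` would be tuned on held-out data; the paper's other two methods, the semantic cache and partial reranking, instead boost recently relevant words or reorder only the top-k LM candidates rather than redistributing the whole distribution.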

Authors (2)
  1. Tonio Wandmacher
  2. Jean-Yves Antoine
Citations (43)
