
Reranking Machine Translation Hypotheses with Structured and Web-based Language Models (2104.12277v1)

Published 25 Apr 2021 in cs.CL

Abstract: In this paper, we investigate the use of linguistically motivated and computationally efficient structured language models for reranking N-best hypotheses in a statistical machine translation system. These language models, developed from Constraint Dependency Grammar parses, tightly integrate knowledge of words, morphological and lexical features, and syntactic dependency constraints. Two structured language models are applied for N-best rescoring: one is an almost-parsing language model, and the other utilizes more syntactic features by explicitly modeling syntactic dependencies between words. We also investigate effective and efficient language modeling methods that use N-grams extracted from up to 1 teraword of web documents. We apply all of these language models to N-best reranking on the NIST and DARPA GALE program 2006 and 2007 machine translation evaluation tasks and find that their combination increases the BLEU score by up to 1.6% absolute on blind test sets.
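The N-best reranking the abstract describes is typically a log-linear rescoring: each hypothesis keeps its baseline decoder score, additional language models contribute their own (log-probability) scores, and the hypothesis with the best weighted combination wins. The sketch below illustrates that idea only; the function name, toy scorer, and weights are hypothetical and not taken from the paper.

```python
def rerank_nbest(hypotheses, lm_scorers, weights):
    """Pick the best hypothesis from an N-best list by a weighted
    log-linear combination of the baseline decoder score and the
    scores of extra language models (a generic sketch, not the
    paper's exact feature set or tuned weights).

    hypotheses: list of (text, baseline_log_score) pairs
    lm_scorers: list of callables mapping text -> log-probability
    weights:    [w_baseline, w_lm1, w_lm2, ...]
    """
    def combined(hyp):
        text, base = hyp
        score = weights[0] * base
        for w, lm in zip(weights[1:], lm_scorers):
            score += w * lm(text)
        return score

    return max(hypotheses, key=combined)


# Toy usage: a crude "LM" that heavily penalizes sentences not
# starting with "the" flips the ranking produced by the baseline.
hyps = [("the cat sat", -2.0), ("cat the sat", -1.5)]
toy_lm = lambda t: 0.0 if t.startswith("the") else -5.0
best = rerank_nbest(hyps, [toy_lm], [1.0, 1.0])
```

In practice the weights would be tuned on a development set (e.g. to maximize BLEU), and each scorer would be a real model such as an almost-parsing structured LM or a web-scale N-gram model, as in the paper.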

Citations (22)
