
Beyond Word-based Language Model in Statistical Machine Translation (1502.01446v1)

Published 5 Feb 2015 in cs.CL

Abstract: The language model is one of the most important modules in statistical machine translation, and currently the word-based language model dominates this community. However, many translation models (e.g., phrase-based models) generate target-language sentences by rendering and compositing phrases rather than words. Thus, it is much more reasonable to model the dependency between phrases, but little prior work has succeeded in solving this problem. In this paper, we tackle it by designing a novel phrase-based language model that addresses three key sub-problems: (1) how to define a phrase in the language model; (2) how to determine phrase boundaries in large-scale monolingual data in order to enlarge the training set; (3) how to alleviate the data sparsity caused by the huge vocabulary size of phrases. By carefully handling these issues, extensive experiments on Chinese-to-English translation show that our phrase-based language model significantly improves translation quality, by up to +1.47 absolute BLEU.
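The abstract's core idea, scoring sequences of phrases instead of individual words, can be illustrated with a toy sketch. This is not the paper's actual model; it is a minimal phrase-level bigram LM with add-one smoothing, assuming sentences have already been segmented into phrases (the paper's first two sub-problems), where smoothing stands in for its sparsity handling:

```python
from collections import defaultdict

def train_phrase_bigram_lm(segmented_corpus):
    """Train a bigram LM where the basic unit is a phrase, not a word.

    segmented_corpus: list of sentences, each a list of phrase strings
    (phrase segmentation is assumed to be done upstream).
    Returns a function prob(prev_phrase, cur_phrase).
    """
    unigram = defaultdict(int)
    bigram = defaultdict(int)
    for phrases in segmented_corpus:
        seq = ["<s>"] + phrases + ["</s>"]
        for p in seq:
            unigram[p] += 1
        for a, b in zip(seq, seq[1:]):
            bigram[(a, b)] += 1
    vocab_size = len(unigram)

    def prob(prev, cur):
        # Add-one smoothing: a crude stand-in for handling the data
        # sparsity that comes with a huge phrase vocabulary.
        return (bigram[(prev, cur)] + 1) / (unigram[prev] + vocab_size)

    return prob

# Tiny illustrative corpus of pre-segmented sentences (hypothetical data).
corpus = [
    ["the president", "met with", "the delegation"],
    ["the president", "spoke to", "reporters"],
]
p = train_phrase_bigram_lm(corpus)
```

In this sketch, a sentence beginning with "the president" receives a higher probability than one beginning with "reporters", since the former occurs sentence-initially in the training data.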

Authors (5)
  1. Jiajun Zhang (176 papers)
  2. Shujie Liu (101 papers)
  3. Mu Li (95 papers)
  4. Ming Zhou (182 papers)
  5. Chengqing Zong (65 papers)
Citations (3)
