
Larger-Context Language Modelling (1511.03729v2)

Published 11 Nov 2015 in cs.CL

Abstract: In this work, we propose a novel method to incorporate corpus-level discourse information into language modelling. We call this larger-context language modelling. We introduce a late fusion approach to a recurrent language model based on long short-term memory (LSTM) units, which helps the LSTM unit keep intra-sentence dependencies and inter-sentence dependencies separate from each other. Through evaluation on three corpora (IMDB, BBC, and Penn Treebank), we demonstrate that the proposed model improves perplexity significantly. In the experiments, we evaluate the proposed approach while varying the number of context sentences and observe that the proposed late fusion is superior to the usual way of incorporating additional inputs into the LSTM. By analyzing the trained larger-context language model, we discover that content words, including nouns, adjectives and verbs, benefit most from an increasing number of context sentences. This analysis suggests that larger-context language modelling improves over the unconditional language model by capturing the theme of a document better and more easily.
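As a rough illustration of the late-fusion idea described in the abstract, the sketch below shows a recurrent language model in which a vector summarizing the preceding sentences is merged with the LSTM output through a learned gate just before the softmax, rather than being appended to the word input. This is a simplified stand-in, not the paper's exact equations; names such as LateFusionLM, ctx_dim, and fuse_gate are hypothetical.

```python
# Minimal sketch of late fusion of document context into an LSTM language model.
# Assumption: "context" is a fixed-size summary of previous sentences; the paper's
# precise gating of the memory cell is approximated by gating the hidden state.
import torch
import torch.nn as nn

class LateFusionLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512, ctx_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        # Gate deciding how much inter-sentence context to let through at each step.
        self.fuse_gate = nn.Linear(hid_dim + ctx_dim, ctx_dim)
        self.ctx_proj = nn.Linear(ctx_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens, context):
        # tokens:  (batch, seq_len) word ids of the current sentence
        # context: (batch, ctx_dim) summary vector of the preceding sentences
        h, _ = self.lstm(self.embed(tokens))             # intra-sentence dependencies
        ctx = context.unsqueeze(1).expand(-1, h.size(1), -1)
        gate = torch.sigmoid(self.fuse_gate(torch.cat([h, ctx], dim=-1)))
        fused = h + self.ctx_proj(gate * ctx)            # inter-sentence info fused late
        return self.out(fused)                           # next-word logits
```

Keeping the fusion after the recurrence is what lets the LSTM devote its state to within-sentence structure while the gate decides, token by token, how much document-level theme to mix back in.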

Authors (2)
  1. Tian Wang (77 papers)
  2. Kyunghyun Cho (292 papers)
Citations (88)