Chronologically Consistent Large Language Models

Published 28 Feb 2025 in q-fin.GN and q-fin.TR (arXiv:2502.21206v2)

Abstract: LLMs are increasingly used in social sciences, but their training data can introduce lookahead bias and training leakage. A good chronologically consistent LLM requires efficient use of training data to maintain accuracy despite time-restricted data. Here, we overcome this challenge by training a suite of chronologically consistent LLMs, ChronoBERT and ChronoGPT, which incorporate only the text data that would have been available at each point in time. Despite this strict temporal constraint, our models achieve strong performance on natural language processing benchmarks, outperforming or matching widely used models (e.g., BERT), and remain competitive with larger open-weight models. Lookahead bias is model and application-specific because even if a chronologically consistent LLM has poorer language comprehension, a regression or prediction model applied on top of the LLM can compensate. In an asset pricing application predicting next-day stock returns from financial news, we find that ChronoBERT's real-time outputs achieve a Sharpe ratio comparable to state-of-the-art models, indicating that lookahead bias is modest. Our results demonstrate a scalable, practical framework to mitigate training leakage, ensuring more credible backtests and predictions across finance and other social science domains.
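The core idea behind chronological consistency, as described in the abstract, is that each model vintage is pretrained only on text that existed up to a given cutoff date. The sketch below is a minimal illustration of that data-filtering step under assumed inputs; the corpus, cutoff dates, and function names are hypothetical and do not reflect the paper's actual training pipeline.

```python
from datetime import date

# Hypothetical corpus: (publication_date, text) pairs, stand-ins for dated news articles.
corpus = [
    (date(1998, 5, 1), "Fed holds rates steady amid low inflation ..."),
    (date(2007, 11, 2), "Credit markets tighten as mortgage losses mount ..."),
    (date(2021, 3, 15), "Chip shortage disrupts automaker production ..."),
]

def chronologically_consistent_slice(corpus, knowledge_cutoff):
    """Keep only documents published on or before the cutoff,
    so a model trained on this slice cannot see future text (no lookahead bias)."""
    return [text for published, text in corpus if published <= knowledge_cutoff]

# One model vintage per cutoff; each vintage is trained only on past text.
for cutoff in [date(1999, 12, 31), date(2009, 12, 31), date(2019, 12, 31)]:
    training_texts = chronologically_consistent_slice(corpus, cutoff)
    # train_model(training_texts)  # placeholder for the actual pretraining step
    print(cutoff, len(training_texts), "documents available")
```

In a backtest, predictions for a given date would then come from the most recent vintage whose cutoff precedes that date, so both the language model and any downstream return-prediction model use only information available in real time.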
