Know Your Limits: Entropy Estimation Modeling for Compression and Generalization

Published 13 Nov 2025 in cs.CL, cs.AI, cs.IT, and cs.LG | arXiv:2511.10618v1

Abstract: Language prediction is constrained by the informational entropy intrinsic to language, so there is a limit to how accurate any LLM can become and, equivalently, a lower bound on language compression. The most efficient language compression algorithms today are causal (next-token prediction) LLMs, but using these models to form accurate estimates of language entropy is currently computationally infeasible. We introduce encoder-augmented causal decoder model architectures that exhibit superior training efficiency and achieve higher compression than causal transformers even when trained on modest hardware. We demonstrate how entropy estimates can be obtained on a per-token basis, and show that the generalization of models trained to approach the entropy of their training data necessarily exceeds the generalization of models trained to minimize loss beyond this value. We show empirically that causal models trained to approach, but not exceed, estimated per-token entropies generalize better than models trained without taking entropy into account.
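The abstract describes an entropy-aware training objective: per-token loss is driven toward, but not below, an estimated per-token entropy. Below is a minimal PyTorch sketch of one way such a floor could be imposed; the function name entropy_floored_loss, the clamped-excess form of the loss, and the use of nats are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def entropy_floored_loss(logits, targets, entropy_floor):
    """Hypothetical sketch: penalize only the portion of each token's
    cross-entropy that exceeds its estimated entropy, so training
    approaches, but is not pushed below, the per-token entropy floor.

    logits:        (batch, seq, vocab)
    targets:       (batch, seq) token ids
    entropy_floor: (batch, seq) estimated per-token entropies in nats
    """
    # Per-token negative log-likelihood in nats.
    nll = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        targets.view(-1),
        reduction="none",
    ).view_as(targets)
    # Tokens whose loss is already at or below the floor contribute no gradient.
    excess = torch.clamp(nll - entropy_floor, min=0.0)
    return excess.mean()

# Toy usage with random tensors (shapes and floor value are arbitrary).
batch, seq, vocab = 2, 16, 50257
logits = torch.randn(batch, seq, vocab, requires_grad=True)
targets = torch.randint(0, vocab, (batch, seq))
floor = torch.full((batch, seq), 3.0)  # assumed per-token entropy estimates (nats)
loss = entropy_floored_loss(logits, targets, floor)
loss.backward()
```

Under this sketch, a standard cross-entropy objective is recovered by setting the floor to zero; the paper's contribution is in how the per-token entropy estimates themselves are obtained, which the abstract attributes to encoder-augmented causal decoder models.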
