Breaking the Softmax Bottleneck via Learnable Monotonic Pointwise Non-linearities (1902.08077v2)

Published 21 Feb 2019 in cs.LG and stat.ML

Abstract: The Softmax function on top of a final linear layer is the de facto method to output probability distributions in neural networks. In many applications such as language modeling or text generation, this model has to produce distributions over large output vocabularies. Recently, this has been shown to have limited representational capacity due to its connection with the rank bottleneck in matrix factorization. However, little is known about the limitations of Linear-Softmax for quantities of practical interest such as cross entropy or mode estimation, a direction that we explore here. As an efficient and effective solution to alleviate this issue, we propose to learn parametric monotonic functions on top of the logits. We theoretically investigate the rank-increasing capabilities of such monotonic functions. Empirically, our method improves on two different quality metrics over the traditional Linear-Softmax layer in synthetic and real language modeling experiments, adding little time or memory overhead, while being comparable to the more computationally expensive mixture of Softmaxes.
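The core idea of the abstract is architecturally simple: insert a learnable, strictly monotonic pointwise function between the final linear projection and the Softmax, so that probability rankings are preserved while the log-probability matrix is no longer constrained to low rank. The sketch below illustrates one way such a head could look; the parameterization (a non-negative combination of tanh units plus a linear term) and all class and parameter names are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of a Linear -> learnable monotonic non-linearity -> Softmax head.
# Parameterization is an assumption: f(z) = d*z + sum_k a_k * tanh(b_k*z + c_k),
# with a_k, b_k, d >= 0 so that f is monotonically increasing in each logit.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicPointwise(nn.Module):
    def __init__(self, num_units: int = 8):
        super().__init__()
        self.a_raw = nn.Parameter(torch.zeros(num_units))  # softplus -> a_k >= 0
        self.b_raw = nn.Parameter(torch.zeros(num_units))  # softplus -> b_k >= 0
        self.c = nn.Parameter(torch.zeros(num_units))      # unconstrained shifts
        self.d_raw = nn.Parameter(torch.zeros(1))           # softplus -> d >= 0

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        a = F.softplus(self.a_raw)
        b = F.softplus(self.b_raw)
        d = F.softplus(self.d_raw)
        # Apply the same monotone scalar function to every logit independently.
        z_exp = z.unsqueeze(-1)                               # (..., V, 1)
        return d * z + (a * torch.tanh(b * z_exp + self.c)).sum(dim=-1)

class NonlinearSoftmaxHead(nn.Module):
    """Final linear layer followed by the learnable monotonic transform and Softmax."""
    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, vocab_size)
        self.monotone = MonotonicPointwise()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        logits = self.monotone(self.proj(h))   # pointwise, so argmax order is unchanged
        return F.log_softmax(logits, dim=-1)

# Usage: log-probabilities over a 10k-word vocabulary for a batch of hidden states.
head = NonlinearSoftmaxHead(hidden_dim=512, vocab_size=10_000)
log_probs = head(torch.randn(4, 512))          # shape (4, 10000)
```

Because the transform acts elementwise and is monotone, it adds only a handful of parameters per head, which is consistent with the abstract's claim of little time or memory overhead compared to a mixture of Softmaxes.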

Citations (18)
