
On Controllable Sparse Alternatives to Softmax (1810.11975v2)

Published 29 Oct 2018 in cs.LG, cs.CL, and stat.ML

Abstract: Converting an n-dimensional vector to a probability distribution over n objects is a commonly used component in many machine learning tasks like multiclass classification, multilabel classification, attention mechanisms, etc. For this, several probability mapping functions have been proposed and employed in the literature, such as softmax, sum-normalization, spherical softmax, and sparsemax, but there is very little understanding of how they relate to each other. Further, none of the above formulations offers explicit control over the degree of sparsity. To address this, we develop a unified framework that encompasses all these formulations as special cases. This framework ensures simple closed-form solutions and the existence of sub-gradients suitable for learning via backpropagation. Within this framework, we propose two novel sparse formulations, sparsegen-lin and sparsehourglass, that seek to provide control over the degree of desired sparsity. We further develop novel convex loss functions that help induce the behavior of the aforementioned formulations in the multilabel classification setting, showing improved performance. We also demonstrate empirically that the proposed formulations, when used to compute attention weights, achieve better or comparable performance on standard seq2seq tasks like neural machine translation and abstractive summarization.

Authors (6)
  1. Anirban Laha (12 papers)
  2. Saneem A. Chemmengath (2 papers)
  3. Priyanka Agrawal (15 papers)
  4. Mitesh M. Khapra (80 papers)
  5. Karthik Sankaranarayanan (22 papers)
  6. Harish G. Ramaswamy (15 papers)
Citations (58)
