Low-Rank Softmax Can Have Unargmaxable Classes in Theory but Rarely in Practice

Published 12 Mar 2022 in cs.LG and cs.CL | (2203.06462v2)

Abstract: Classifiers in NLP often have a large number of output classes. For example, neural language models (LMs) and machine translation (MT) models both predict tokens from a vocabulary of thousands. The Softmax output layer of these models typically receives as input a dense feature representation whose dimensionality is much lower than that of the output. In theory, the result is that some words may be impossible to predict via argmax, irrespective of the input features, and empirically there is evidence that this happens in small language models. In this paper we ask whether it can happen in practical language models and translation models. To do so, we develop algorithms to detect such "unargmaxable" tokens in public models. We find that 13 out of 150 models do indeed have such tokens; however, they are very infrequent and unlikely to impact model quality. We release our code so that others can inspect their models.
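The detection problem the abstract describes reduces to a convex-hull membership test: ignoring bias terms, a token whose softmax weight row lies in the convex hull of the other rows can never be the strict argmax, because some other token then scores at least as high for every feature vector. Below is a minimal sketch of that criterion as a linear-programming feasibility check. It is an illustration of the geometry, not the paper's released implementation; the names (`W`, `is_unargmaxable`) and the use of SciPy's `linprog` are assumptions for this example, and the paper's code additionally handles biases and scales to large vocabularies.

```python
# Sketch: test whether a token's weight vector lies in the convex hull
# of the other tokens' vectors (biases ignored). If it does, no feature
# vector can make that token the strict argmax.
# Names (W, is_unargmaxable) are illustrative, not from the paper's code.
import numpy as np
from scipy.optimize import linprog

def is_unargmaxable(W: np.ndarray, token: int) -> bool:
    """W: (V, d) softmax weight matrix, one row per token."""
    others = np.delete(W, token, axis=0)            # (V-1, d)
    n = others.shape[0]
    # Feasibility LP: find lambda >= 0 with sum(lambda) = 1
    # and others.T @ lambda = W[token].
    A_eq = np.vstack([others.T, np.ones((1, n))])   # (d+1, n)
    b_eq = np.concatenate([W[token], [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0.0, None), method="highs")
    return res.status == 0   # feasible => inside hull => unargmaxable

# Toy demo: far more tokens than feature dimensions (a low-rank softmax),
# so many rows end up strictly inside the hull and can never win argmax.
rng = np.random.default_rng(0)
W = rng.normal(size=(200, 3))   # 200 "tokens", 3-dim features
print(sum(is_unargmaxable(W, t) for t in range(200)))
```

Note that exact ties sit on the feasibility boundary of this LP, so numerical tolerances matter in practice; the toy demo merely shows that with many more classes than feature dimensions, interior (unargmaxable) rows are common.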

Citations (9)
