
Stolen Probability: A Structural Weakness of Neural Language Models (2005.02433v1)

Published 5 May 2020 in cs.LG and stat.ML

Abstract: Neural Network Language Models (NNLMs) generate probability distributions by applying a softmax function to a distance metric formed by taking the dot product of a prediction vector with all word vectors in a high-dimensional embedding space. The dot-product distance metric forms part of the inductive bias of NNLMs. Although NNLMs optimize well with this inductive bias, we show that this results in a sub-optimal ordering of the embedding space that structurally impoverishes some words at the expense of others when assigning probability. We present numerical, theoretical and empirical analyses showing that words on the interior of the convex hull in the embedding space have their probability bounded by the probabilities of the words on the hull.
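The convex-hull bound follows from a simple fact: if a word's embedding lies inside the convex hull of the other embeddings, its dot product with any prediction vector is a convex combination of the hull words' dot products, so it can never be the strict maximum logit. The sketch below is a hypothetical numerical illustration of this effect, not the authors' code; the 2D embedding matrix, the `softmax` helper, and the random sampling loop are all assumptions made purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three "hull" words at the vertices of a triangle, plus one "interior"
# word at their centroid, which lies strictly inside the convex hull.
E = np.array([
    [1.0, 0.0],
    [-0.5, 0.87],
    [-0.5, -0.87],
    [0.0, 0.0],  # interior word: the centroid of the three vertices
])

def softmax(z):
    z = z - z.max()  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

# For any prediction vector h, the interior word's logit h . x is a convex
# combination (here the mean) of the hull words' logits, so it can never
# strictly exceed the largest hull logit.
max_interior_prob = 0.0
for _ in range(10_000):
    h = rng.normal(size=2) * rng.uniform(0.1, 10.0)  # vary direction and norm
    p = softmax(E @ h)                               # output distribution
    assert p[3] <= p[:3].max() + 1e-12               # interior word never wins
    max_interior_prob = max(max_interior_prob, p[3])

# By Jensen's inequality, the exponentials of the three hull logits sum to
# at least 3 * exp(mean logit), so the interior word's probability <= 0.25.
print(f"largest interior-word probability observed: {max_interior_prob:.4f}")
```

Because the interior word's logit is the mean of the three hull logits, Jensen's inequality gives exp(a) + exp(b) + exp(c) >= 3 exp((a+b+c)/3), so in this toy vocabulary the interior word's softmax probability is capped at 1/4 no matter which prediction vector the model produces: the "stolen probability" effect in miniature.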

Authors (3)
  1. David Demeter
  2. Gregory Kimmel
  3. Doug Downey
Citations (32)
