Increasing transformer token length with a Maximum Entropy Principle Method (2408.10277v1)
Published 17 Aug 2024 in cs.LG
Abstract: Transformers suffer from the computational overhead of their quadratic dependence on the length of the sequences they process. We present three methods, each adding an intermediate step between training and inference/generation, that extend the autoregressive length of transformers. All rely on a Maximum Entropy Principle (MEP), whereby entropy is maximized subject to suitable constraints, enforced via Lagrange multipliers. These constrained methods extend the autoregressive span from T to 2T tokens at a cost linear in T. The added step carries some overhead, but the methods should still be faster than the standard approach.
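To make the MEP machinery concrete, below is a minimal, generic sketch of maximum-entropy inference under a moment constraint with a Lagrange multiplier. It is not the paper's token-extension method; the state count `K`, feature values `f`, and `target` expectation are hypothetical placeholders. The MaxEnt solution takes the Gibbs form p_i ∝ exp(λ f_i), and the sketch solves for λ numerically so the constraint E_p[f] = target holds.

```python
import numpy as np
from scipy.optimize import minimize

# Maximum Entropy Principle sketch: maximize H(p) = -sum_i p_i log p_i
# over K states subject to the constraint E_p[f] = target. The Lagrangian
# yields the Gibbs form p_i ∝ exp(lam * f_i); we fit lam to the constraint.

K = 8                                  # number of states (hypothetical)
f = np.arange(K, dtype=float)          # feature values f(x_i) (hypothetical)
target = 2.5                           # desired expectation E_p[f] (hypothetical)

def gibbs(lam):
    """Distribution p_i ∝ exp(lam * f_i) and its expectation E_p[f]."""
    logits = lam * f
    p = np.exp(logits - logits.max())  # subtract max for numerical stability
    p /= p.sum()
    return p, p @ f

def constraint_gap(lam_vec):
    """Squared gap between achieved and target expectation."""
    _, exp_f = gibbs(lam_vec[0])
    return (exp_f - target) ** 2

# Solve for the Lagrange multiplier that satisfies the constraint.
res = minimize(constraint_gap, x0=[0.0], method="Nelder-Mead")
lam_star = res.x[0]
p_star, exp_f = gibbs(lam_star)
print("lambda* =", lam_star)
print("MaxEnt distribution:", np.round(p_star, 4))
print("E_p[f] =", round(float(exp_f), 4), "(target:", target, ")")
```

In the paper's setting, the constraints would instead encode statistics tied to the trained model's T-token behavior, with the maximum-entropy step supplying the extension to 2T tokens; the sketch above shows only the general constrained-entropy mechanism.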