The geometry of efficient codes: how rate-distortion trade-offs distort the latent representations of generative models (2406.07269v2)
Abstract: Living organisms rely on internal models of the world to act adaptively. Because of resource limitations, these models cannot encode every detail and must therefore compress information. From a cognitive standpoint, information compression can manifest as a distortion of latent representations, resulting in representations that may not accurately reflect the external world or its geometry. Rate-distortion theory formalizes the optimal way to compress information while minimizing such distortions, taking into account factors such as capacity limitations and the frequency and utility of stimuli. However, while this theory explains why these factors distort latent representations, it does not specify which distortions they produce. To address this question, we investigate how rate-distortion trade-offs shape the latent representations of images in generative models, specifically Beta Variational Autoencoders ($\beta$-VAEs), under varying constraints of model capacity, data distribution, and task objective. By systematically exploring these factors, we identify three primary distortions of latent representations: prototypization, specialization, and orthogonalization. These distortions emerge as signatures of information compression, reflecting the model's adaptation to capacity limitations, data imbalances, and task demands. Moreover, our findings show that these distortions can coexist, giving rise to a rich landscape of latent spaces whose geometry can differ significantly across generative models subject to different constraints. Our results help explain how the normative constraints of rate-distortion theory shape the geometry of latent representations in the generative models of both artificial systems and living organisms.
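For readers unfamiliar with how $\beta$ implements the rate-distortion trade-off in this model family, below is a minimal sketch of the standard $\beta$-VAE objective, assuming a Gaussian encoder and a mean-squared-error reconstruction term. The architecture, dimensions, and hyperparameters are illustrative assumptions, not the authors' actual setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    """Minimal beta-VAE sketch: beta weights the KL ("rate") term
    against the reconstruction error ("distortion") term.
    All sizes below are illustrative, not the paper's configuration."""

    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=10, beta=4.0):
        super().__init__()
        self.beta = beta
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

    def loss(self, x):
        x_hat, mu, logvar = self(x)
        # Distortion: per-example reconstruction error
        distortion = F.mse_loss(x_hat, x, reduction="sum") / x.size(0)
        # Rate: KL( q(z|x) || N(0, I) ) for a diagonal Gaussian posterior
        rate = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
        # beta > 1 tightens the capacity budget, forcing stronger compression
        return distortion + self.beta * rate
```

In this reading, sweeping $\beta$ upward constrains the model's channel capacity, which is one natural way to operationalize the capacity manipulation the abstract describes; data-distribution and task-objective manipulations would instead change the training set statistics or the reconstruction weighting.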