Sufficient and effective representation for downstream tasks
Determine a self-supervised learning (SSL) representation that is both sufficient and effective across a variety of downstream tasks: "sufficient" means each downstream task can be completed by composing functions on the learned representation alone, rather than on the original data, and "effective" means those composed functions are lightweight models.
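A common way to operationalize this pair of properties is linear probing: freeze the pretrained encoder and fit a lightweight linear classifier on its outputs only, never on the raw data. The sketch below is a minimal illustration of that setup, not the paper's method; the random-projection "encoder", the synthetic task, and the plain gradient-descent logistic probe are all assumptions chosen for self-containment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained SSL encoder (assumption: any fixed
# feature map works for the illustration; here a random projection + tanh).
W_enc = rng.normal(size=(10, 4))

def encode(x):
    return np.tanh(x @ W_enc)

# Synthetic downstream task: labels depend on the original data.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# "Sufficient": the probe sees only the representation Z, never X.
Z = encode(X)

# "Effective": the composed function is a lightweight linear model,
# trained here with plain logistic-regression gradient descent.
w = np.zeros(Z.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))   # sigmoid predictions
    g = p - y                                 # gradient of log-loss
    w -= 0.1 * Z.T @ g / len(y)
    b -= 0.1 * g.mean()

acc = ((Z @ w + b > 0) == (y == 1)).mean()
print(f"linear-probe accuracy: {acc:.2f}")
```

If the representation is good, the probe's accuracy approaches that of a model trained on the raw data; a large gap signals that the representation is not sufficient for this task.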
References
Concretely, despite the empirical successes achieved by representations from SSL, essential research questions have yet to be resolved, i.e., what representation is sufficient and effective for a variety of downstream tasks, and how can such a representation be learned in an efficient and scalable way?
— Spectral Ghost in Representation Learning: from Component Analysis to Self-Supervised Learning
(2601.20154 - Dai et al., 28 Jan 2026) in Section 1 (Introduction)