Representation Learning by Ranking across multiple tasks (2103.15093v2)
Abstract: In recent years, representation learning has become a central research focus of the machine learning community. Large-scale neural networks are a crucial step toward achieving general intelligence, and their success is largely attributed to their ability to learn abstract representations of data. Many subfields of machine learning actively study how to learn representations, yet a unified perspective is still lacking. We convert the representation learning problem under different tasks into a ranking problem. With ranking as the unifying perspective, these representation learning tasks can all be solved by optimizing a ranking loss. Experiments on a variety of learning tasks, including classification, retrieval, multi-label learning, and regression, demonstrate the superiority of the representation-learning-by-ranking framework. Furthermore, experiments on self-supervised learning tasks show that the ranking framework has a significant advantage in handling unlabeled training data, and that data augmentation techniques further enhance its performance.
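To make the core idea concrete, below is a minimal sketch (not the authors' exact method) of casting representation learning as a ranking problem: an encoder produces embeddings, and a pairwise margin ranking loss pushes the similarity of a "should-rank-higher" pair above that of a "should-rank-lower" pair. All names here (`ToyEncoder`, `embed_dim`, `margin`) and the toy data are illustrative assumptions.

```python
# Minimal sketch: representation learning via a pairwise ranking loss (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    """Maps raw inputs to unit-norm representation vectors."""
    def __init__(self, in_dim=32, embed_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, embed_dim)
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

def pairwise_ranking_loss(anchor, positive, negative, margin=0.2):
    """Require the positive pair to rank above the negative pair by at least `margin`."""
    sim_pos = (anchor * positive).sum(dim=-1)  # cosine similarity (unit vectors)
    sim_neg = (anchor * negative).sum(dim=-1)
    # Hinge on the ranking violation; equivalent to nn.MarginRankingLoss with target = +1.
    return F.relu(margin - (sim_pos - sim_neg)).mean()

# Toy training step: each anchor is paired with a positive (e.g. same class, or an
# augmented view in the self-supervised setting) and a negative (e.g. different class).
encoder = ToyEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
x_anchor, x_pos, x_neg = (torch.randn(8, 32) for _ in range(3))
loss = pairwise_ranking_loss(encoder(x_anchor), encoder(x_pos), encoder(x_neg))
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```

The same loss structure can, in principle, be reused across the tasks mentioned in the abstract by changing only how positive and negative pairs are formed (class labels for classification, relevance for retrieval, label overlap for multi-label learning, target proximity for regression, or augmented views for self-supervised learning).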