LightXML: Transformer with Dynamic Negative Sampling for High-Performance Extreme Multi-label Text Classification (2101.03305v1)

Published 9 Jan 2021 in cs.CL and cs.LG

Abstract: Extreme Multi-label text Classification (XMC) is a task of finding the most relevant labels from a large label set. Nowadays deep learning-based methods have shown significant success in XMC. However, the existing methods (e.g., AttentionXML and X-Transformer etc) still suffer from 1) combining several models to train and predict for one dataset, and 2) sampling negative labels statically during the process of training label ranking model, which reduces both the efficiency and accuracy of the model. To address the above problems, we proposed LightXML, which adopts end-to-end training and dynamic negative labels sampling. In LightXML, we use generative cooperative networks to recall and rank labels, in which label recalling part generates negative and positive labels, and label ranking part distinguishes positive labels from these labels. Through these networks, negative labels are sampled dynamically during label ranking part training by feeding with the same text representation. Extensive experiments show that LightXML outperforms state-of-the-art methods in five extreme multi-label datasets with much smaller model size and lower computational complexity. In particular, on the Amazon dataset with 670K labels, LightXML can reduce the model size up to 72% compared to AttentionXML.

Citations (129)

Summary

  • The paper presents LightXML, featuring dynamic negative sampling within a transformer architecture to significantly boost performance in extreme multi-label text classification.
  • The model overcomes limitations of prior approaches by reducing model size and computational complexity while maintaining high precision metrics on large-scale datasets such as Amazon-670K.
  • Empirical results demonstrate that LightXML achieves state-of-the-art performance with up to 72% model size reduction compared to existing XMC methods.

A Technical Overview of LightXML for Extreme Multi-label Text Classification

The paper "LightXML: Transformer with Dynamic Negative Sampling for High-Performance Extreme Multi-label Text Classification" introduces an innovative approach to Extreme Multi-label Text Classification (XMC), a significant task in NLP aiming to identify relevant labels from a vast set of possibilities. This paper critically advances the field by leveraging transformers with dynamic negative sampling, optimizing both computational efficiency and model performance.

Methodological Advances

The LightXML model addresses several limitations of existing XMC approaches such as AttentionXML and X-Transformer. Those methods typically combine several separately trained models for a single dataset, which increases training cost and inference overhead. Furthermore, they sample negative labels statically: the negatives used to train the label ranking model are fixed in advance and never adapt as training progresses, so the ranker is not exposed to the harder negatives that emerge later, which reduces both efficiency and accuracy.

In contrast, LightXML proposes a compact and efficient framework built on generative cooperative networks trained end-to-end around a single transformer encoder: a label recalling part generates candidate (positive and negative) labels, and a label ranking part distinguishes the positives among them. Because both parts are fed the same text representation, negative labels are sampled dynamically during training, allowing the ranking part to progress from easy to hard cases without overfitting. LightXML thereby balances accuracy against computational demands while keeping model size and complexity low.
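To make this recall-then-rank interplay concrete, the following is a minimal PyTorch sketch of how dynamic negative sampling can be wired up: a cluster-level recall head and a per-label ranking head share one text representation, and the ranking head only scores the labels the recall head currently surfaces. The module and parameter names (`LightXMLSketch`, `num_clusters`, `labels_per_cluster`, `top_k_clusters`) are illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch of LightXML-style dynamic negative sampling
# (names and structure are assumptions, not the paper's implementation).
import torch
import torch.nn as nn

class LightXMLSketch(nn.Module):
    def __init__(self, hidden_dim, num_clusters, labels_per_cluster, top_k_clusters):
        super().__init__()
        self.labels_per_cluster = labels_per_cluster
        self.top_k_clusters = top_k_clusters
        # Label recalling head: scores coarse label clusters from the text embedding.
        self.recall_head = nn.Linear(hidden_dim, num_clusters)
        # Label ranking head: one embedding per fine-grained label.
        self.label_emb = nn.Embedding(num_clusters * labels_per_cluster, hidden_dim)

    def forward(self, text_repr):
        # text_repr: (batch, hidden_dim) pooled transformer output, shared by both heads.
        cluster_scores = self.recall_head(text_repr)
        # Dynamic negative sampling: candidate labels come from whatever clusters the
        # recall head currently ranks highest, so the negatives change as it learns.
        top_clusters = cluster_scores.topk(self.top_k_clusters, dim=-1).indices  # (batch, k)
        offsets = torch.arange(self.labels_per_cluster, device=text_repr.device)
        candidate_labels = (top_clusters.unsqueeze(-1) * self.labels_per_cluster
                            + offsets).flatten(1)        # (batch, k * labels_per_cluster)
        # Ranking head scores only the candidate (positive + hard negative) labels.
        cand_emb = self.label_emb(candidate_labels)       # (batch, C, hidden_dim)
        label_scores = torch.einsum("bd,bcd->bc", text_repr, cand_emb)
        return cluster_scores, candidate_labels, label_scores
```

In an end-to-end setup consistent with the paper's description, both heads would be optimized jointly (e.g., with binary cross-entropy on cluster-level and label-level targets), which is what makes the sampled negatives dynamic: they track the recall head's current top predictions rather than a fixed pre-sampled set.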

Empirical Validation

The efficacy of LightXML is demonstrated through extensive experiments on five large-scale XMC datasets: Eurlex-4K, Wiki10-31K, AmazonCat-13K, Wiki-500K, and Amazon-670K. Notably, LightXML achieves superior performance across these datasets with significant reductions in model size and computational complexity. For instance, on Amazon-670K, LightXML reduces model size by up to 72% compared to AttentionXML while still outperforming other state-of-the-art methods.

In terms of the precision metric P@k, LightXML consistently demonstrates improvements over existing models, attesting to the effectiveness of dynamic negative sampling and the robustness of the transformer-based framework.
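For reference, P@k is the fraction of the top-k predicted labels that are truly relevant, averaged over test instances. The helper below is a generic illustration of that metric, not code from the paper.

```python
import numpy as np

def precision_at_k(scores, true_labels, k=5):
    """P@k: fraction of the top-k scored labels found in the true label set,
    averaged over all examples.
    scores: (n_docs, n_labels) array; true_labels: list of sets of label ids."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = [len(set(row) & labels) / k for row, labels in zip(topk, true_labels)]
    return float(np.mean(hits))

# Toy example: 2 documents, 6 labels.
scores = np.array([[0.9, 0.1, 0.8, 0.2, 0.7, 0.0],
                   [0.2, 0.95, 0.1, 0.6, 0.3, 0.4]])
true = [{0, 2}, {1, 3, 5}]
print(precision_at_k(scores, true, k=3))  # (2/3 + 3/3) / 2 = 0.8333...
```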

Implications and Future Research

The practical implications of LightXML are profound, particularly in environments with resource constraints where efficient and scalable models are crucial. The model's ability to operate end-to-end on large-scale, complex datasets represents a substantial step forward in making XMC more accessible for real-world applications.

Theoretically, the integration of dynamic negative sampling provides a promising avenue to explore how models can better navigate large label spaces with improved learning dynamics. Future research might delve into optimizing negative sampling strategies even further and exploring alternative architectures that can benefit from LightXML's approach.

Additionally, given the increasing application of transformers across diverse NLP tasks, subsequent studies could assess the generalizability of the generative cooperative networks beyond XMC.

Overall, LightXML emerges as a potent contribution to the field of extreme multi-label classification, offering a more efficient pathway to harnessing transformer-based architectures for complex labeling tasks.
