
Constructing Contrastive samples via Summarization for Text Classification with limited annotations (2104.05094v3)

Published 11 Apr 2021 in cs.CL and cs.LG

Abstract: Contrastive learning has emerged as a powerful representation learning method and facilitates various downstream tasks, especially when supervised data is limited. How to construct efficient contrastive samples through data augmentation is key to its success. Unlike in vision tasks, data augmentation methods for contrastive learning have not been sufficiently investigated for language tasks. In this paper, we propose a novel approach to constructing contrastive samples for language tasks using text summarization. We use these samples for supervised contrastive learning to obtain better text representations, which greatly benefit text classification tasks with limited annotations. To further improve the method, we mix up samples from different classes and add an extra regularization, named Mixsum, in addition to the cross-entropy loss. Experiments on real-world text classification datasets (Amazon-5, Yelp-5, AG News, and IMDb) demonstrate the effectiveness of the proposed contrastive learning framework with summarization-based data augmentation and Mixsum regularization.
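
The abstract compresses the full recipe, so a rough sketch may help: each document's machine-generated summary serves as its augmented "view", and a supervised contrastive (SupCon-style) loss pulls same-class representations together. The snippet below is a minimal illustration, not the authors' code; `supervised_contrastive_loss` follows the standard SupCon formulation, the `mixup` helper is a generic mixup stand-in for the paper's Mixsum regularizer (whose exact form is defined in the paper), and all names and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """SupCon-style loss over L2-normalized embeddings.

    features: (N, D) encoder outputs; each original text and its summary
              contribute one row carrying the same class label.
    labels:   (N,) integer class ids.
    """
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature            # (N, N) similarities
    n = features.size(0)
    diag = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(diag, float("-inf"))           # exclude self-pairs
    # Positives: rows with the same label (summary view or same-class sample).
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~diag
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    # Average log-likelihood over each anchor's positives.
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_count
    return loss.mean()

def mixup(h_a: torch.Tensor, h_b: torch.Tensor, alpha: float = 0.2):
    """Generic mixup of two same-shape batches drawn from different classes;
    the paper's Mixsum regularizer mixes summarized samples in this spirit."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * h_a + (1 - lam) * h_b, lam
```

In a hypothetical training loop, rows 2i and 2i+1 of `features` would hold the embeddings of document i and its summary, and the SupCon loss above would be added to the usual cross-entropy term, with the mixup-based regularizer as a third component.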

Authors (7)
  1. Yangkai Du (8 papers)
  2. Tengfei Ma (73 papers)
  3. Lingfei Wu (135 papers)
  4. Fangli Xu (17 papers)
  5. Xuhong Zhang (61 papers)
  6. Bo Long (60 papers)
  7. Shouling Ji (136 papers)
Citations (10)