
LINDA: Unsupervised Learning to Interpolate in Natural Language Processing (2112.13969v1)

Published 28 Dec 2021 in cs.CL and cs.LG

Abstract: Despite the success of mixup in data augmentation, its applicability to NLP tasks has been limited by the discrete and variable-length nature of natural language. Recent studies have therefore relied on domain-specific heuristics and manually crafted resources, such as dictionaries, to apply mixup in NLP. In this paper, we instead propose an unsupervised learning approach to text interpolation for data augmentation, which we refer to as "Learning to INterpolate for Data Augmentation" (LINDA). LINDA requires neither heuristics nor manually crafted resources but learns to interpolate between any pair of natural language sentences over a natural language manifold. After empirically demonstrating LINDA's interpolation capability, we show that LINDA allows us to seamlessly apply mixup in NLP and leads to better generalization in text classification both in-domain and out-of-domain.
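To make the core idea concrete, below is a minimal sketch of mixup-style text interpolation in the hidden space of a pretrained encoder-decoder. This is not the trained LINDA model: the checkpoint name (facebook/bart-base), the Beta concentration alpha, and the padding trick for aligning sequence lengths are all illustrative assumptions, whereas LINDA trains a dedicated model that learns to interpolate across differing lengths.

```python
# Rough sketch: decode from a convex combination of two sentences'
# encoder hidden states, with the mixing ratio drawn from a Beta
# distribution as in standard mixup. Illustrative only; NOT LINDA.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer
from transformers.modeling_outputs import BaseModelOutput

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
model.eval()

def interpolate(sent_a: str, sent_b: str, alpha: float = 0.2) -> str:
    # Mixup samples the mixing ratio lambda from Beta(alpha, alpha).
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    # Pad both sentences to a shared length so hidden states align;
    # LINDA instead learns this length interpolation.
    batch = tokenizer([sent_a, sent_b], return_tensors="pt", padding=True)
    with torch.no_grad():
        hidden = model.get_encoder()(
            input_ids=batch["input_ids"],
            attention_mask=batch["attention_mask"],
        ).last_hidden_state  # shape: (2, seq_len, d_model)
    # Convex combination of the two encoder states.
    mixed = lam * hidden[0] + (1.0 - lam) * hidden[1]
    enc = BaseModelOutput(last_hidden_state=mixed.unsqueeze(0))
    out = model.generate(encoder_outputs=enc, max_length=64, num_beams=4)
    # For classification, labels would be mixed the same way:
    # y_mix = lam * y_a + (1 - lam) * y_b.
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(interpolate("The movie was fantastic.", "The plot was dull and slow."))
```

Decoding a frozen model from naively mixed states may yield degenerate text, which is precisely the motivation for LINDA's unsupervised training: it learns a decoder that maps interpolated representations back onto the natural language manifold.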

Authors (3)
  1. Yekyung Kim (8 papers)
  2. Seohyeong Jeong (4 papers)
  3. Kyunghyun Cho (292 papers)
Citations (8)
