Leveraging Adversarial Training in Self-Learning for Cross-Lingual Text Classification (2007.15072v1)

Published 29 Jul 2020 in cs.CL

Abstract: In cross-lingual text classification, one seeks to exploit labeled data from one language to train a text classification model that can then be applied to a completely different language. Recent multilingual representation models have made it much easier to achieve this. Nevertheless, subtle differences between languages may be neglected when doing so. To address this, we present a semi-supervised adversarial training process that minimizes the maximal loss for label-preserving input perturbations. The resulting model then serves as a teacher to induce labels for unlabeled target language samples that can be used during further adversarial training, allowing us to gradually adapt our model to the target language. Compared with a number of strong baselines, we observe significant gains in effectiveness on document and intent classification for a diverse set of languages.
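The abstract describes two interacting pieces: adversarial training against label-preserving input perturbations, and a teacher that pseudo-labels unlabeled target-language samples for further training. Below is a minimal PyTorch sketch of that loop under stated assumptions: the `Classifier` model, the norm-bounded FGSM-style perturbation of embeddings, the confidence threshold in `pseudo_label`, and all toy data are illustrative stand-ins, not the paper's actual implementation.

```python
# Sketch: embedding-level adversarial training plus self-learning on
# pseudo-labeled target-language data. All names and hyperparameters
# here are hypothetical; the paper's method is only approximated.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Classifier(nn.Module):
    def __init__(self, vocab_size=1000, dim=64, num_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.fc = nn.Linear(dim, num_classes)

    def forward_from_embeddings(self, emb):
        # Mean-pool token embeddings, then classify.
        return self.fc(emb.mean(dim=1))

    def forward(self, token_ids):
        return self.forward_from_embeddings(self.embed(token_ids))

def adversarial_loss(model, token_ids, labels, epsilon=1.0):
    # Loss at a first-order worst-case, label-preserving perturbation
    # of the input embeddings (one gradient step, norm-bounded).
    emb = model.embed(token_ids).detach().requires_grad_(True)
    loss = F.cross_entropy(model.forward_from_embeddings(emb), labels)
    grad, = torch.autograd.grad(loss, emb)
    # Perturb each token embedding in the loss-increasing direction.
    delta = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
    adv_emb = model.embed(token_ids) + delta
    return F.cross_entropy(model.forward_from_embeddings(adv_emb), labels)

def pseudo_label(model, unlabeled_ids, threshold=0.9):
    # Teacher step: keep only confident predictions on target-language data.
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_ids), dim=-1)
        conf, labels = probs.max(dim=-1)
    keep = conf >= threshold
    return unlabeled_ids[keep], labels[keep]

# Toy usage with random tensors standing in for source/target corpora.
model = Classifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
src_ids = torch.randint(0, 1000, (32, 20))
src_labels = torch.randint(0, 4, (32,))
tgt_ids = torch.randint(0, 1000, (32, 20))

for step in range(3):
    # Adversarial training on labeled source-language data.
    loss = adversarial_loss(model, src_ids, src_labels)
    # Self-learning: add confidently pseudo-labeled target samples.
    pl_ids, pl_labels = pseudo_label(model, tgt_ids)
    if len(pl_ids) > 0:
        loss = loss + adversarial_loss(model, pl_ids, pl_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Alternating the two steps gradually adapts the model to the target language: as the teacher improves, more target samples pass the confidence threshold and feed back into the adversarial training objective.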

Authors (7)
  1. Xin Dong (90 papers)
  2. Yaxin Zhu (6 papers)
  3. Yupeng Zhang (25 papers)
  4. Zuohui Fu (28 papers)
  5. Dongkuan Xu (43 papers)
  6. Sen Yang (191 papers)
  7. Gerard de Melo (78 papers)
Citations (30)
