From Masked Language Modeling to Translation: Non-English Auxiliary Tasks Improve Zero-shot Spoken Language Understanding (2105.07316v1)

Published 15 May 2021 in cs.CL

Abstract: The lack of publicly available evaluation data for low-resource languages limits progress in Spoken Language Understanding (SLU). As key tasks like intent classification and slot filling require abundant training data, it is desirable to reuse existing data in high-resource languages to develop models for low-resource scenarios. We introduce xSID, a new benchmark for cross-lingual Slot and Intent Detection in 13 languages from 6 language families, including a very low-resource dialect. To tackle the challenge, we propose a joint learning approach, with English SLU training data and non-English auxiliary tasks from raw text, syntax and translation for transfer. We study two setups which differ by type and language coverage of the pre-trained embeddings. Our results show that jointly learning the main tasks with masked language modeling is effective for slots, while machine translation transfer works best for intent classification.
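
The abstract describes jointly training the English intent and slot objectives with non-English auxiliary objectives such as masked language modeling. The sketch below illustrates what such a multi-task setup can look like; it is not the authors' implementation, and the model architecture, dimensions, head names, and auxiliary loss weight are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): a shared encoder with an
# intent head, a slot head, and an auxiliary masked-LM head, trained
# with a weighted sum of losses. All sizes and the 0.5 auxiliary
# weight are assumed for illustration.
import torch
import torch.nn as nn

class JointSLUModel(nn.Module):
    def __init__(self, vocab_size, num_intents, num_slots, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # shared encoder
        self.intent_head = nn.Linear(d_model, num_intents)  # sentence-level intent
        self.slot_head = nn.Linear(d_model, num_slots)       # token-level slot labels
        self.mlm_head = nn.Linear(d_model, vocab_size)       # auxiliary masked LM

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))               # (batch, seq_len, d_model)
        return {
            "intent": self.intent_head(h[:, 0]),              # first token acts as [CLS]
            "slots": self.slot_head(h),
            "mlm": self.mlm_head(h),
        }

def joint_loss(out, intent_gold, slot_gold, mlm_gold, aux_weight=0.5):
    """Main SLU losses on English data plus a weighted auxiliary MLM loss
    computed on target-language raw text (weight is an assumed hyper-parameter).
    Positions not relevant to a loss term carry the ignore label -100."""
    ce = nn.CrossEntropyLoss(ignore_index=-100)
    loss = ce(out["intent"], intent_gold)
    loss = loss + ce(out["slots"].flatten(0, 1), slot_gold.flatten())
    loss = loss + aux_weight * ce(out["mlm"].flatten(0, 1), mlm_gold.flatten())
    return loss
```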

Authors (9)
  1. Rob van der Goot (38 papers)
  2. Ibrahim Sharaf (2 papers)
  3. Aizhan Imankulova (6 papers)
  4. Ahmet Üstün (38 papers)
  5. Marija Stepanović (2 papers)
  6. Alan Ramponi (8 papers)
  7. Siti Oryza Khairunnisa (1 paper)
  8. Mamoru Komachi (40 papers)
  9. Barbara Plank (130 papers)
Citations (43)
