
Improving Non-autoregressive Generation with Mixup Training (2110.11115v1)

Published 21 Oct 2021 in cs.CL

Abstract: While pre-trained LLMs have achieved great success on various natural language understanding tasks, how to effectively leverage them for non-autoregressive generation tasks remains a challenge. To solve this problem, we present a non-autoregressive generation model based on pre-trained transformer models. To bridge the gap between autoregressive and non-autoregressive models, we propose a simple and effective iterative training method called MIx Source and pseudo Target (MIST). Unlike other iterative decoding methods, which sacrifice inference speed to achieve better performance through multiple decoding iterations, MIST works in the training stage and has no effect on inference time. Our experiments on three generation benchmarks, including question generation, summarization, and paraphrase generation, show that the proposed framework achieves new state-of-the-art results for fully non-autoregressive models. We also demonstrate that our method can be applied to a variety of pre-trained models. For instance, MIST based on a small pre-trained model also obtains performance comparable to seq2seq models.
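The abstract describes MIST only at a high level: during training, source tokens and model-generated pseudo-target tokens are mixed, so the model sees partially self-predicted inputs without paying any cost at inference time. The sketch below illustrates one plausible reading of such a mixing step; the function name, the per-position mixing rule, and the `mix_prob` parameter are all illustrative assumptions, not the paper's exact procedure.

```python
import random

def mix_source_and_pseudo_target(source_ids, pseudo_target_ids,
                                 mix_prob=0.5, seed=0):
    """Build a mixed training input (illustrative sketch, not the paper's
    exact algorithm): each position keeps the model's own pseudo-target
    token with probability `mix_prob`, and otherwise falls back to the
    corresponding source-side token, aligned to the target length."""
    rng = random.Random(seed)
    length = len(pseudo_target_ids)
    # Align the source sequence to the target length (truncate, or pad
    # with 0 as a stand-in pad id).
    aligned_source = (source_ids + [0] * length)[:length]
    return [p if rng.random() < mix_prob else s
            for p, s in zip(pseudo_target_ids, aligned_source)]
```

Because the mixing happens while constructing training inputs, the decoding procedure itself is untouched, which is consistent with the abstract's claim that MIST has no effect on inference time.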

Authors (9)
  1. Ting Jiang (28 papers)
  2. Shaohan Huang (79 papers)
  3. Zihan Zhang (121 papers)
  4. Deqing Wang (36 papers)
  5. Fuzhen Zhuang (97 papers)
  6. Furu Wei (291 papers)
  7. Haizhen Huang (18 papers)
  8. Liangjie Zhang (7 papers)
  9. Qi Zhang (785 papers)
Citations (8)