Semantic Label Smoothing for Sequence to Sequence Problems (2010.07447v1)

Published 15 Oct 2020 in cs.CL and cs.LG

Abstract: Label smoothing has been shown to be an effective regularization strategy in classification that prevents overfitting and helps with label de-noising. However, extending such methods directly to seq2seq settings, such as machine translation, is challenging: the large target output space of these problems makes it intractable to apply label smoothing over all possible outputs. Most existing approaches for seq2seq settings either perform token-level smoothing or smooth over sequences generated by randomly substituting tokens in the target sequence. Unlike these works, in this paper we propose a technique that smooths over \emph{well-formed}, relevant sequences that not only have sufficient n-gram overlap with the target sequence but are also \emph{semantically similar}. Our method shows consistent and significant improvements over state-of-the-art techniques on several datasets.
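
The contrast the abstract draws between token-level smoothing and smoothing over well-formed, semantically similar sequences can be illustrated with a short sketch. The code below is a minimal illustration under stated assumptions, not the authors' implementation: `model.log_prob`, the `candidates` set (e.g. paraphrases with high n-gram overlap to the reference), the `sims` similarity scores, and the `alpha` mixing scheme are all hypothetical names introduced for exposition.

```python
import torch

def sequence_smoothed_loss(model, src, tgt, candidates, sims, alpha=0.1):
    """Hypothetical sketch of sequence-level label smoothing.

    Instead of spreading probability mass uniformly over the vocabulary
    at each time step (token-level smoothing), mass is spread over a
    small set of well-formed candidate sequences, weighted by their
    semantic similarity to the reference sequence `tgt`.

    model.log_prob(src, seq) is an assumed API returning the scalar
    log-likelihood of `seq` given `src`; candidates is a list of
    token-id tensors; sims is a matching list of similarity scores.
    """
    # Standard negative log-likelihood of the reference sequence.
    nll_target = -model.log_prob(src, tgt)

    # Normalize similarity scores into a distribution over candidates.
    weights = torch.softmax(torch.tensor(sims), dim=0)

    # Smoothing term: expected NLL over the semantically similar set.
    nll_candidates = torch.stack([-model.log_prob(src, c) for c in candidates])
    smoothing = (weights * nll_candidates).sum()

    # Mix the two terms, mirroring classic label smoothing's
    # (1 - alpha) / alpha split between the target and the smoothing prior.
    return (1.0 - alpha) * nll_target + alpha * smoothing
```

In this reading, classic label smoothing is recovered if the candidate set were all sequences weighted uniformly, which is exactly what the abstract notes is intractable; restricting it to a small, semantically filtered set is what makes the sequence-level version practical.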

Authors (7)
  1. Michal Lukasik (23 papers)
  2. Himanshu Jain (19 papers)
  3. Aditya Krishna Menon (56 papers)
  4. Seungyeon Kim (22 papers)
  5. Srinadh Bhojanapalli (44 papers)
  6. Felix Yu (62 papers)
  7. Sanjiv Kumar (123 papers)
Citations (18)
