Rethinking Exposure Bias In Language Modeling (1910.11235v2)

Published 13 Oct 2019 in cs.CL and cs.LG

Abstract: Exposure bias describes the phenomenon that a language model trained under the teacher-forcing schema may perform poorly at the inference stage, when its predictions are conditioned on its own previous predictions unseen in the training corpus. Recently, several generative adversarial network (GAN) and reinforcement learning (RL) methods have been introduced to alleviate this problem. Nonetheless, a common issue in RL and GAN training is the sparsity of reward signals. In this paper, we adopt two simple strategies, multi-range reinforcing and multi-entropy sampling, to amplify and denoise the reward signal. Our model improves over competing models with regard to BLEU scores and road exam, a new metric we designed to measure robustness against exposure bias in language models.
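
The training/inference mismatch the abstract describes can be made concrete with a few lines of code. The sketch below is illustrative only and is not the paper's implementation; the model size, vocabulary, toy data, and start-token id are all assumptions. It trains a tiny recurrent language model with teacher forcing (every step conditioned on gold prefixes) and then decodes free-running, where every step is conditioned on the model's own sampled token.

```python
# Minimal sketch (not the paper's code): contrasts teacher forcing with
# free-running decoding to illustrate where exposure bias comes from.
# Vocabulary, model sizes, and data below are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB, EMB, HID = 100, 32, 64

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.head = nn.Linear(HID, VOCAB)

    def forward(self, tokens, hidden=None):
        out, hidden = self.rnn(self.embed(tokens), hidden)
        return self.head(out), hidden

model = TinyLM()
criterion = nn.CrossEntropyLoss()

# Training with teacher forcing: every step is conditioned on the
# ground-truth prefix, so the model never sees its own mistakes.
batch = torch.randint(0, VOCAB, (8, 20))            # toy corpus batch
logits, _ = model(batch[:, :-1])                    # inputs are gold tokens
loss = criterion(logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
loss.backward()

# Inference: each step is conditioned on the model's *own* previous
# prediction, i.e. prefixes never encountered during training.
tok = torch.zeros(1, 1, dtype=torch.long)           # assumed start-token id 0
hidden = None
generated = []
for _ in range(20):
    logits, hidden = model(tok, hidden)
    tok = logits[:, -1].softmax(-1).multinomial(1)  # sample next token
    generated.append(tok.item())
print(generated)
```

During training every prefix comes from the corpus, while in the sampling loop every prefix comes from the model itself, so early mistakes push the model onto states it never visited during training; this compounding mismatch is what the paper's reward-based strategies aim to mitigate.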

Authors (6)
  1. Yifan Xu (92 papers)
  2. Kening Zhang (2 papers)
  3. Haoyu Dong (55 papers)
  4. Yuezhou Sun (2 papers)
  5. Wenlong Zhao (18 papers)
  6. Zhuowen Tu (80 papers)
Citations (3)
