Noise-Robust Fine-Tuning of Pretrained Language Models via External Guidance (2311.01108v1)

Published 2 Nov 2023 in cs.CL

Abstract: Adopting a two-stage paradigm of pretraining followed by fine-tuning, Pretrained Language Models (PLMs) have achieved substantial advancements in the field of natural language processing. However, in real-world scenarios, data labels are often noisy due to the complex annotation process, making it essential to develop strategies for fine-tuning PLMs with such noisy labels. To this end, we introduce an innovative approach for fine-tuning PLMs using noisy labels, which incorporates the guidance of Large Language Models (LLMs) such as ChatGPT. This guidance helps accurately distinguish clean samples from noisy ones and provides supplementary information beyond the noisy labels, thereby boosting learning during the fine-tuning of PLMs. Extensive experiments on synthetic and real-world noisy datasets further demonstrate the advantages of our framework over state-of-the-art baselines.
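The abstract describes using an external LLM's predictions to separate clean from noisy samples and to supply extra supervision during fine-tuning. Below is a minimal sketch of that general idea, not the authors' exact method: it assumes a hypothetical `query_llm_label` helper standing in for a ChatGPT call, treats samples where the LLM agrees with the given label as clean, and down-weights disagreeing samples while using the LLM's label as the fallback target.

```python
# Hedged sketch of LLM-guided noise-robust fine-tuning (assumptions noted below),
# not a reproduction of the paper's framework.

import torch
import torch.nn.functional as F


def query_llm_label(text: str, num_classes: int) -> int:
    """Hypothetical stand-in for an external LLM (e.g. ChatGPT) zero-shot
    classification call; returns a predicted class index."""
    raise NotImplementedError("replace with an actual LLM API call")


def noise_aware_loss(logits: torch.Tensor,
                     noisy_labels: torch.Tensor,
                     llm_labels: torch.Tensor,
                     clean_weight: float = 1.0,
                     noisy_weight: float = 0.3) -> torch.Tensor:
    """Cross-entropy that trusts the dataset label when the LLM agrees with it
    (treated as a clean sample) and falls back to the LLM's label, down-weighted,
    when they disagree (treated as a likely noisy sample)."""
    agree = noisy_labels == llm_labels
    ce_dataset = F.cross_entropy(logits, noisy_labels, reduction="none")
    ce_llm = F.cross_entropy(logits, llm_labels, reduction="none")
    per_sample = torch.where(agree, clean_weight * ce_dataset, noisy_weight * ce_llm)
    return per_sample.mean()
```

In a training loop, `llm_labels` would be precomputed once per example with `query_llm_label` and cached, so the PLM's fine-tuning step only needs the stored LLM predictions rather than a live API call per batch. The agreement threshold and weights here are illustrative defaults, not values from the paper.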

Authors (4)
  1. Song Wang (313 papers)
  2. Zhen Tan (68 papers)
  3. Ruocheng Guo (62 papers)
  4. Jundong Li (126 papers)
Citations (13)