
Teaching Language Models to Hallucinate Less with Synthetic Tasks (2310.06827v3)

Published 10 Oct 2023 in cs.CL and cs.LG

Abstract: LLMs frequently hallucinate on abstractive summarization tasks such as document-based question-answering, meeting summarization, and clinical report generation, even though all necessary information is included in context. However, optimizing LLMs to hallucinate less on these tasks is challenging, as hallucination is hard to efficiently evaluate at each optimization step. In this work, we show that reducing hallucination on a synthetic task can also reduce hallucination on real-world downstream tasks. Our method, SynTra, first designs a synthetic task where hallucinations are easy to elicit and measure. It next optimizes the LLM's system message via prefix-tuning on the synthetic task, and finally transfers the system message to realistic, hard-to-optimize tasks. Across three realistic abstractive summarization tasks, SynTra reduces hallucination for two 13B-parameter LLMs using only a synthetic retrieval task for supervision. We also find that optimizing the system message rather than the model weights can be critical; fine-tuning the entire model on the synthetic task can counterintuitively increase hallucination. Overall, SynTra demonstrates that the extra flexibility of working with synthetic data can help mitigate undesired behaviors in practice.

Analysis of "Teaching LLMs to Hallucinate Less with Synthetic Tasks"

The paper presents SynTra, a method for reducing hallucination in LLMs, which frequently fabricate content on abstractive tasks even when all the necessary information is in context. Hallucination critically undermines the utility of LLMs in real-world applications such as document-based QA, meeting summarization, and clinical report generation. Mitigating it directly is challenging because hallucination is expensive to evaluate at each optimization step.

Core Proposal

SynTra is built on the premise that optimizing a model on a synthetic task where hallucination is easily observable can indirectly reduce hallucination on real-world tasks. The method has three steps: design a synthetic task where hallucination can be elicited and measured efficiently, optimize the LLM's system message via prefix-tuning on that task, and transfer the optimized message to harder real-world tasks.

Methodological Details

  1. Synthetic Task Design: The pivotal component of SynTra is a synthetic 'names retrieval' task in which the LLM is given a list of random names and prompted to retrieve those matching a condition. Hallucinated content can then be identified mechanically by checking every generated name against the source list, enabling precise optimization (see the first sketch after this list).
  2. Optimization via Prefix-tuning: The paper identifies optimizing the system message, rather than the model weights, as crucial for reducing hallucination. A continuous postfix is appended to the system message and trained via prefix-tuning to encode high-level behaviors that discourage hallucination (second sketch below).
  3. Evaluation and Transferability: SynTra is evaluated with Vicuna v1.1 and Orca on search-and-retrieval, meeting summarization, and clinical report generation, using GPT-4 as a judge to score hallucination. Results show hallucination rates drop by over 7 percentage points on average for some models, without substantial drift from reference summaries, as evidenced by comparable BLEU and ROUGE scores.
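
A minimal sketch of the names-retrieval setup from step 1, in Python. The name pools, the `make_example` and `hallucinated_names` helpers, and the stand-in model output are illustrative assumptions rather than the paper's exact implementation; the point is that any emitted name outside the source list is verifiably fabricated.

```python
import random

# Illustrative name pools (assumption: the paper samples random names; any pool works).
FIRST = ["Alice", "Bob", "Carol", "David", "Erin", "Frank", "Grace", "Priya"]
LAST = ["Kim", "Lopez", "Nguyen", "Okafor", "Patel", "Rossi", "Sato", "Weber"]

def make_example(n_names=10, seed=0):
    """Build one synthetic example: a list of random names plus a retrieval prompt."""
    rng = random.Random(seed)
    names = [f"{rng.choice(FIRST)} {rng.choice(LAST)}" for _ in range(n_names)]
    prompt = ("Here is a list of names:\n" + "\n".join(names)
              + "\nList every name above whose surname starts with 'P'.")
    return names, prompt

def hallucinated_names(names, model_output):
    """Any emitted name absent from the source list is, by construction, a hallucination."""
    allowed = set(names)
    emitted = [line.strip() for line in model_output.splitlines() if line.strip()]
    return [n for n in emitted if n not in allowed]

names, prompt = make_example()
# output = generate(prompt)          # hypothetical call to the LLM under test
output = "Anita Patel\nZoe Patel"    # stand-in output for illustration
print(hallucinated_names(names, output))  # neither name is in the list -> both flagged
```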

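A minimal sketch of step 2: prefix-tuning a continuous postfix on the system message while the model weights stay frozen. The checkpoint name, postfix length, learning rate, and toy training example are assumptions for illustration, not the paper's configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "lmsys/vicuna-13b-v1.1"  # assumed checkpoint; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.requires_grad_(False)  # freeze every model weight; only the postfix trains

embed = model.get_input_embeddings()
postfix = torch.nn.Parameter(torch.randn(8, embed.embedding_dim) * 0.01)  # soft tokens
opt = torch.optim.Adam([postfix], lr=1e-3)

def loss_on(system_msg, user_msg, target):
    """Cross-entropy on the continuation, with the soft postfix spliced in
    right after the system-message embeddings."""
    sys_ids = tok(system_msg, return_tensors="pt").input_ids
    rest_ids = tok(user_msg + target, return_tensors="pt").input_ids
    inputs = torch.cat([embed(sys_ids), postfix.unsqueeze(0), embed(rest_ids)], dim=1)
    # -100 masks the system-message and soft-token positions from the loss.
    labels = torch.cat(
        [torch.full((1, sys_ids.size(1) + postfix.size(0)), -100), rest_ids], dim=1)
    return model(inputs_embeds=inputs, labels=labels).loss

loss = loss_on("You are a careful assistant.",
               "List every name whose surname starts with 'P':\n", "Priya Patel")
loss.backward()  # gradients flow only to the postfix
opt.step()
```
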
Results Interpretation

The empirical results underscore SynTra's effectiveness, particularly when the system message is optimized jointly with reference data, which further reduces biases specific to the synthetic task (a sketch of this joint objective follows below). Fine-tuning the full model weights on the synthetic task is shown to be less effective and can even increase hallucination, reinforcing the case for targeting the system message.
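
A sketch of that joint update, reusing the `loss_on` helper from the prefix-tuning sketch above. The batch averaging and the trade-off weight `lam` are assumptions, since the analysis only states that reference data is used alongside the synthetic task.

```python
lam = 1.0  # assumed weight balancing synthetic supervision against reference likelihood

def joint_step(opt, synthetic_batch, reference_batch):
    """One postfix update: hallucination loss on the synthetic task plus a
    likelihood term on human reference summaries to limit drift."""
    opt.zero_grad()
    syn = sum(loss_on(s, u, t) for s, u, t in synthetic_batch) / len(synthetic_batch)
    ref = sum(loss_on(s, u, t) for s, u, t in reference_batch) / len(reference_batch)
    (syn + lam * ref).backward()
    opt.step()
```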

Implications and Future Work

Practically, SynTra suggests that deliberate task and data design can shape LLM behavior beyond what conventional fine-tuning offers, providing an efficient alternative where human annotation is cost-prohibitive. Theoretically, the work hints that LLMs can learn abstract behaviors that transfer across diverse applications, echoing ideas of representational robustness and task generalization.

Future research could refine the method across more models and tasks, broaden synthetic task design beyond narrow, hand-tailored scopes, and extend the approach to larger LLMs or to models available only through commercial APIs. Understanding which intrinsic properties of a synthetic task make its learned behavior transfer effectively also remains an open question, and answering it could guide broader applications in machine learning.

The paper exemplifies a broader shift toward task-agnostic mitigation methods, balancing the efficiency of synthetic supervision against practical utility on real tasks.

Authors (8)
  1. Erik Jones (15 papers)
  2. Hamid Palangi (52 papers)
  3. Varun Chandrasekaran (39 papers)
  4. Subhabrata Mukherjee (59 papers)
  5. Arindam Mitra (40 papers)
  6. Ahmed Awadallah (27 papers)
  7. Ece Kamar (37 papers)
  8. Clarisse Simões (1 paper)