
UCSC at SemEval-2025 Task 3: Context, Models and Prompt Optimization for Automated Hallucination Detection in LLM Output (2505.03030v1)

Published 5 May 2025 in cs.CL

Abstract: Hallucinations pose a significant challenge for LLMs when answering knowledge-intensive queries. As LLMs become more widely adopted, it is crucial not only to detect if hallucinations occur but also to pinpoint exactly where in the LLM output they occur. SemEval 2025 Task 3, Mu-SHROOM: Multilingual Shared-task on Hallucinations and Related Observable Overgeneration Mistakes, is a recent effort in this direction. This paper describes the UCSC system submission to the shared Mu-SHROOM task. We introduce a framework that first retrieves relevant context, next identifies false content from the answer, and finally maps them back to spans in the LLM output. The process is further enhanced by automatically optimizing prompts. Our system achieves the highest overall performance, ranking #1 in average position across all languages. We release our code and experiment results.

Context, Models and Prompt Optimization for Automated Hallucination Detection in LLM Output

The paper "UCSC at SemEval-2025 Task 3: Context, Models and Prompt Optimization for Automated Hallucination Detection in LLM Output" outlines the development of a sophisticated framework aimed at addressing the challenge of detecting hallucinated content in the outputs of LLMs. Hallucinations, in this context, refer to instances where models generate information that is false or unverifiable, which poses significant concerns within the field due to their impact on the reliability and trustworthiness of LLMs in knowledge-intensive tasks.

Framework for Hallucination Detection

The authors introduce a multi-stage framework designed specifically for Task 3 of the SemEval 2025 Mu-SHROOM challenge, which required participants to identify hallucinated spans across multilingual outputs. The proposed system involves three key stages (a minimal end-to-end sketch follows the list):

  1. Context Retrieval: Relevant information is gathered from external sources to provide a factual basis for verifying model outputs. Context is retrieved by querying with either the question or claims extracted from the model-generated response, enabling cross-verification of content.
  2. Hallucinated Content Detection: Several methods are explored for detecting false or unverifiable content, including direct text extraction and verification against structured knowledge graphs. A distinctive method, Minimal Cost Revision, employs reasoning models to minimally revise the generated answer; the discrepancies between the original and revised answers then indicate the hallucinated segments.
  3. Span Mapping: Identified hallucinations are mapped back to character-level spans in the LLM output using approaches such as substring matching and edit-distance alignment.
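
The sketch below wires these three stages together, using the Minimal Cost Revision variant for detection. The function names and interfaces are illustrative assumptions, not the authors' exact implementation; the LLM-backed stages are left as stubs.

```python
import difflib

def retrieve_context(question: str) -> str:
    """Stage 1 (assumed interface): fetch supporting text from an
    external source, e.g. a search API or an offline Wikipedia dump."""
    raise NotImplementedError("plug in a retriever here")

def revise_answer(answer: str, context: str) -> str:
    """Stage 2 (assumed interface): an LLM call in the spirit of
    Minimal Cost Revision -- ask a reasoning model to change the
    answer as little as possible so it agrees with `context`."""
    raise NotImplementedError("plug in an LLM call here")

def diff_spans(answer: str, revised: str) -> list[tuple[int, int]]:
    """Stage 3: character spans of `answer` that the revision
    replaced or deleted are flagged as hallucinated."""
    matcher = difflib.SequenceMatcher(None, answer, revised)
    return [(i1, i2) for tag, i1, i2, _, _ in matcher.get_opcodes()
            if tag in ("replace", "delete")]

def detect_hallucinations(question: str, answer: str) -> list[tuple[int, int]]:
    context = retrieve_context(question)
    revised = revise_answer(answer, context)
    return diff_spans(answer, revised)
```

Using a character-level diff for span mapping has the advantage that the detector never has to report offsets itself; only the revised text is needed, and the alignment recovers the spans.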

Optimization Strategy

To enhance detection accuracy, prompts are optimized with the MIPROv2 framework. MIPROv2 explores candidate prompt configurations through Bayesian search, retaining those that maximize the task's evaluation metrics: Intersection over Union (IoU) and Spearman correlation (Corr).
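
Both metrics can be computed at the character level, as sketched below. This is a straightforward reading of the task's evaluation, assuming hard spans for IoU and per-character probabilities for the correlation score; the empty-vs-empty convention is an assumption.

```python
from scipy.stats import spearmanr

def char_iou(pred_spans, gold_spans):
    """Character-level IoU between predicted and gold hallucination
    spans, each given as (start, end) offsets into the same answer."""
    pred = {i for s, e in pred_spans for i in range(s, e)}
    gold = {i for s, e in gold_spans for i in range(s, e)}
    if not pred and not gold:
        return 1.0  # both empty: treat as perfect agreement (assumption)
    return len(pred & gold) / len(pred | gold)

def char_corr(pred_probs, gold_probs):
    """Spearman correlation between per-character hallucination
    probabilities from the system and from the annotators."""
    rho, _ = spearmanr(pred_probs, gold_probs)
    return rho

# Example: predicted span (5, 12) vs. gold span (7, 15)
print(char_iou([(5, 12)], [(7, 15)]))  # 5 shared chars / 10 total = 0.5
```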

Results

The UCSC team reports that their system performed well across multiple languages, securing top rankings on the Mu-SHROOM task. Notably, the framework achieves high Intersection-over-Union scores, demonstrating superior accuracy in span-level hallucination detection compared to competing systems, particularly in English and other major European languages. System combination further improved correlation scores by aggregating predictions from multiple system variants into a composite prediction that correlates more closely with human annotations.
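
One natural way to realize such a combination, assuming each variant emits a per-character hallucination probability, is to average the probabilities and threshold the result back into spans. The paper does not specify this exact rule; the sketch below is purely illustrative.

```python
import numpy as np

def combine_systems(per_system_probs: list[list[float]]) -> np.ndarray:
    """Average per-character hallucination probabilities across
    system variants to form a composite soft prediction."""
    probs = np.asarray(per_system_probs)  # shape: (n_systems, n_chars)
    return probs.mean(axis=0)

def to_spans(probs: np.ndarray, threshold: float = 0.5):
    """Threshold the composite probabilities back into hard
    (start, end) character spans for the IoU metric."""
    spans, start = [], None
    for i, p in enumerate(probs):
        if p >= threshold and start is None:
            start = i
        elif p < threshold and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(probs)))
    return spans

# Three variants scoring a 6-character answer
composite = combine_systems([
    [0.9, 0.8, 0.1, 0.0, 0.7, 0.6],
    [0.7, 0.9, 0.2, 0.1, 0.5, 0.8],
    [0.8, 0.7, 0.0, 0.0, 0.6, 0.7],
])
print(to_spans(composite))  # [(0, 2), (4, 6)]
```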

Implications for Future Research

Findings from this research underline the importance of grounding LLM outputs in reliable contexts to mitigate hallucinations effectively. Furthermore, optimizing the detection process through prompt adjustments and leveraging reasoning capabilities can significantly enhance system performance. This offers promising avenues for further advancement within AI models tasked with producing factual and reliable output across diverse linguistic and application domains.

Conclusion

The paper thus presents a robust framework for hallucination detection that not only advances technical methodologies for identifying and mapping hallucinated content but also adapts strategically to multilingual scenarios. As LLM applications continue to grow, continued investigations into context-aware and optimization-enhanced hallucination detection systems will undoubtedly play a critical role in ensuring their factual reliability and broader acceptance in real-world applications.

Authors (6)
  1. Sicong Huang (12 papers)
  2. Jincheng He (5 papers)
  3. Shiyuan Huang (17 papers)
  4. Karthik Raja Anandan (1 paper)
  5. Arkajyoti Chakraborty (5 papers)
  6. Ian Lane (29 papers)