SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (2411.02433v2)

Published 1 Nov 2024 in cs.CL, cs.AI, and stat.ML

Abstract: LLMs have demonstrated remarkable capabilities, but their outputs can sometimes be unreliable or factually incorrect. To address this, we introduce Self Logits Evolution Decoding (SLED), a novel decoding framework that enhances the truthfulness of LLMs without relying on external knowledge bases or requiring further fine-tuning. From an optimization perspective, our SLED framework leverages the latent knowledge embedded within the LLM by contrasting the output logits from the final layer with those from early layers. It then utilizes an approximate gradient approach to enable latent knowledge to guide the self-refinement of outputs, thereby effectively improving factual accuracy. Extensive experiments have been conducted on established benchmarks across a diverse range of model families (LLaMA 2, LLaMA 3, Gemma) and scales (from 2B to 70B), including more advanced architectural configurations such as the mixture of experts (MoE). Our evaluation spans a wide variety of tasks, including multi-choice, open-generation, and adaptations to chain-of-thought reasoning tasks. The results demonstrate that SLED consistently improves factual accuracy by up to 20% compared to existing decoding methods while maintaining natural language fluency and negligible latency overhead. Furthermore, it can be flexibly combined with other decoding methods to further enhance their performance.

Self Logits Evolution Decoding for Enhancing Factuality in LLMs

The manuscript introduces Self Logits Evolution Decoding (SLED), a decoding framework designed to improve the factual accuracy of LLMs by exploiting the latent knowledge they already encode. SLED addresses the critical challenge of hallucination in LLM-generated content, which undermines the reliability of these models in information-sensitive tasks. The method is an optimization-based, inference-time solution that requires neither additional training data nor external knowledge integration, refining model outputs purely at decoding time.

Methodological Insights

SLED rests on a contrastive analysis of output logits from different stages of LLM processing: the final layer versus earlier layers. The divergence revealed by this layer-wise comparison exposes latent knowledge embedded in the model that the final-layer distribution alone leaves untapped. SLED exploits this signal with an approximate gradient approach, taking a descent step that evolves the final logits toward the estimated factual distribution. This self-refinement is intended to reconcile the predicted outputs with factual truth without the external retrieval or retraining dependencies of competing methods; a sketch of the core update follows.
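
To make the update concrete, here is a minimal sketch of a SLED-style refinement step in PyTorch. It is not the authors' implementation: the contrastive estimate (a softmax over the final-minus-early logit gap), the choice of early layer, and the step size alpha are all illustrative assumptions.

    # Minimal, illustrative sketch of SLED-style logit refinement.
    # NOT the paper's implementation; the contrastive estimate and
    # the step size `alpha` below are assumptions for illustration.
    import torch
    import torch.nn.functional as F

    def sled_step(final_logits: torch.Tensor,
                  early_logits: torch.Tensor,
                  alpha: float = 0.1) -> torch.Tensor:
        """Nudge final-layer logits toward a 'latent' distribution
        estimated by contrasting them with early-layer logits
        (both tensors have shape [vocab_size])."""
        # Contrastive estimate of latent knowledge: tokens whose
        # evidence grows between the early and final layers get
        # up-weighted, in the spirit of layer-wise contrast.
        latent = F.softmax(final_logits - early_logits, dim=-1)

        # One approximate gradient step: the gradient of
        # KL(latent || softmax(logits)) with respect to the logits
        # is softmax(logits) - latent, so descending along it pulls
        # the output distribution toward `latent`.
        return final_logits - alpha * (F.softmax(final_logits, dim=-1) - latent)

    # Toy usage over a 5-token vocabulary.
    final = torch.tensor([2.0, 1.5, 0.3, -1.0, -2.0])
    early = torch.tensor([2.5, 0.2, 0.1, -0.5, -2.0])
    print(sled_step(final, early))

Because the update operates only on logits at each decoding step, it slots in front of any sampling or greedy selection rule, which is consistent with the paper's claim that SLED composes with other decoding methods at negligible latency cost.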

Experiments and Results

The experimental framework evaluates SLED across multiple state-of-the-art LLM families, including LLaMA 2, LLaMA 3, and Gemma, at scales from 2 billion to 70 billion parameters and including mixture-of-experts (MoE) configurations. The benchmarks are diverse, covering TruthfulQA and FACTOR, and the task types range from multiple-choice and open-ended generation to chain-of-thought reasoning. The results show that SLED improves factual accuracy by up to 20% over existing decoding methods while sustaining natural language fluency and incurring negligible latency overhead. This balance of factual integrity with operational efficiency is the paper's central empirical contribution.

Theoretical and Practical Implications

Theoretically, SLED enriches the understanding of layer-wise dynamics in LLMs, showing that latent knowledge can be harnessed without external datasets or retraining and suggesting directions for model introspection and for diagnosing where training leaves knowledge underused. Practically, SLED offers a straightforward, drop-in decoding modification for improving output factuality, which holds promise for deploying LLMs in sectors where accuracy is paramount.

Future Developments and Integration

Looking forward, SLED could be combined with supervised fine-tuning or other non-invasive refinement strategies, and the paper notes that it composes flexibly with existing decoding methods to further enhance their performance. Given the rapid evolution of LLM capabilities, its methodology may motivate further work on fine-grained, retrofit adjustments that improve output quality without sacrificing decoding speed.

In sum, Self Logits Evolution Decoding stands as a promising methodological advancement in the pursuit of truthfulness in AI-generated content, positioning LLMs for more reliable deployment across information-critical domains.

Authors (6)
  1. Jianyi Zhang (39 papers)
  2. Da-Cheng Juan (38 papers)
  3. Cyrus Rashtchian (31 papers)
  4. Chun-Sung Ferng (8 papers)
  5. Heinrich Jiang (32 papers)
  6. Yiran Chen (176 papers)