LegalΔ: Enhancing Legal Reasoning in LLMs via Reinforcement Learning with Chain-of-Thought Guided Information Gain

Published 17 Aug 2025 in cs.CL (arXiv:2508.12281v2)

Abstract: Legal Artificial Intelligence (LegalAI) has achieved notable advances in automating judicial decision-making with the support of LLMs. However, existing legal LLMs still struggle to generate reliable and interpretable reasoning processes. They often default to fast-thinking behavior by producing direct answers without explicit multi-step reasoning, limiting their effectiveness in complex legal scenarios that demand rigorous justification. To address this challenge, we propose LegalΔ, a reinforcement learning framework designed to enhance legal reasoning through chain-of-thought guided information gain. During training, LegalΔ employs a dual-mode input setup, comprising direct answer and reasoning-augmented modes, and maximizes the information gain between them. This encourages the model to acquire meaningful reasoning patterns rather than generating superficial or redundant explanations. LegalΔ follows a two-stage approach: (1) distilling latent reasoning capabilities from a powerful Large Reasoning Model (LRM), DeepSeek-R1, and (2) refining reasoning quality via differential comparisons, combined with a multidimensional reward mechanism that assesses both structural coherence and legal-domain specificity. Experimental results on multiple legal reasoning tasks demonstrate that LegalΔ outperforms strong baselines in both accuracy and interpretability. It consistently produces more robust and trustworthy legal judgments without relying on labeled preference data. All code and data will be released at https://github.com/NEUIR/LegalDelta.

Summary

  • The paper demonstrates the integration of Chain-of-Thought guided information gain into reinforcement learning to enhance legal reasoning capabilities in LLMs.
  • It leverages latent reasoning distillation and differential Q-value analysis to ensure structured and legally specific reasoning outputs.
  • Experimental results on datasets like CAIL2018 show a 10% performance boost with improved interpretability and stability on complex legal tasks.

Introduction

The paper introduces LegalΔ, a reinforcement learning framework designed to bolster the legal reasoning capabilities of LLMs. Specifically, the framework integrates a novel mechanism called Chain-of-Thought guided Information Gain into the Reinforcement Learning with Verifiable Rewards (RLVR) paradigm. This integration encourages LLMs to generate not only answers but also structured, interpretable reasoning processes that are crucial for handling complex legal scenarios.
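
For orientation, RLVR trains against rewards that can be checked programmatically rather than learned from preference data. The snippet below is a minimal sketch of such a verifiable reward for a judgment-prediction task; the `Answer:` output convention and the `extract_answer` helper are hypothetical, and the paper's actual reward additionally scores reasoning structure and legal-domain specificity.

```python
import re

def extract_answer(completion: str) -> str:
    """Pull the final answer from a completion that ends with
    'Answer: <label>'; returns '' if no answer line is found."""
    match = re.search(r"Answer:\s*(.+)", completion)
    return match.group(1).strip() if match else ""

def verifiable_reward(completion: str, gold: str) -> float:
    """Binary verifiable reward: 1.0 if the extracted answer matches the
    gold label (e.g., a charge name or article number), else 0.0."""
    return 1.0 if extract_answer(completion) == gold else 0.0
```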

Methodology

LegalΔ incorporates a dual-mode input setup, comprising direct-answer and reasoning-augmented modes, to maximize the information gain between them. This setup encourages the model to explore diverse reasoning trajectories and extract meaningful reasoning patterns; a minimal sketch of the two input modes follows Figure 1. The core methodology involves two stages:

  1. Latent Reasoning Distillation: The framework distills latent reasoning capabilities from a large reasoning model (LRM), DeepSeek-R1, into the target LLM.
  2. Differential Comparison and Multidimensional Reward: The system refines reasoning quality through differential comparisons, combined with a multidimensional reward that assesses structural coherence and legal-domain specificity.

    Figure 1: Illustration of LegalΔ. We present how chain-of-thought guided information gain works in RLVR.
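
The dual-mode setup can be read as two prompts built from the same case: one eliciting a direct answer, the other eliciting chain-of-thought reasoning first. The sketch below illustrates this under assumed prompt wording; `build_inputs` and both templates are hypothetical, not the paper's exact templates.

```python
def build_inputs(case_facts: str, question: str) -> dict:
    """Build the two input modes for the same legal case."""
    direct = (
        f"Case facts: {case_facts}\n"
        f"Question: {question}\n"
        "Answer directly with the final judgment."
    )
    reasoning = (
        f"Case facts: {case_facts}\n"
        f"Question: {question}\n"
        "Think step by step about the applicable statutes and the elements "
        "of the offense, then state the final judgment."
    )
    return {"direct": direct, "reasoning": reasoning}
```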

Information Gain-Enhanced Reward

The reward mechanism is a critical aspect of LegalΔ. It leverages the analogy between model logits and Q-values in reinforcement learning to quantify information gain. This involves evaluating pointwise mutual information and global confidence shifts through:

  • Logit Analysis: Monitoring the model's confidence through average logit values during Chain-of-Thought prompting.
  • Differential Q-value Analysis: Comparing the differential gain in Q-values from reasoning prompts to measure semantic shifts contributing to reasoning quality.
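
A minimal sketch of how such a differential signal could be computed is shown below, assuming the logits for the gold answer tokens are available under both input modes (via teacher forcing); the function names and tensor shapes are assumptions, not the paper's implementation.

```python
import torch

def answer_logit_score(logits: torch.Tensor, answer_ids: torch.Tensor) -> float:
    """Average logit (Q-value analogue) assigned to the gold answer tokens.
    logits: [T, vocab] at the answer positions; answer_ids: [T]."""
    token_scores = logits.gather(-1, answer_ids.unsqueeze(-1)).squeeze(-1)
    return token_scores.mean().item()

def information_gain(direct_logits: torch.Tensor,
                     reasoning_logits: torch.Tensor,
                     answer_ids: torch.Tensor) -> float:
    """Differential gain: how much more confident the model is in the
    answer after chain-of-thought reasoning than when answering directly."""
    return (answer_logit_score(reasoning_logits, answer_ids)
            - answer_logit_score(direct_logits, answer_ids))
```

A positive gain indicates that the chain of thought genuinely informs the answer, while a near-zero gain flags the superficial or redundant explanations the framework is designed to discourage.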

Experimental Setup

The study evaluates LegalΔ across a range of legal tasks, demonstrating notable improvements over baseline models in both accuracy and interpretability. These tasks, drawn from datasets like CAIL2018 and JEC_QA, include legal article prediction, criminal charge prediction, and case analysis, among others. The framework shows a 10% performance boost across various LLM scales without the need for labeled preference data.

Figure 2: Logit variations induced by the information gain module during training.

Results and Discussion

LegalΔ consistently outperforms baseline models, exhibiting strong generalization to out-of-domain tasks as well. The information gain module, specifically, contributes significantly to confidence improvements, training efficiency, and stability. The RL component, which utilizes Group Relative Policy Optimization (GRPO), proves effective in distinguishing high-quality reasoning; a minimal sketch of GRPO's group-relative advantage follows Figure 3.

Figure 3: Analysis of the information gain-enhanced reward modeling process.
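
GRPO's core mechanism is a group-relative advantage: several completions are sampled per prompt and each is scored against the group mean, with no learned value model. The sketch below shows the standard normalization; clipping, KL regularization, and the paper's particular reward weighting are omitted.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """rewards: [G] scores for G completions sampled from one prompt.
    Each completion's advantage is its reward standardized within the
    group, so above-average reasoning chains are reinforced and
    below-average ones are penalized."""
    return (rewards - rewards.mean()) / (rewards.std() + 1e-6)
```

In this setting, the per-completion rewards would combine the verifiable correctness signal with the information-gain and multidimensional reward terms described above.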

Conclusion

LegalΔ establishes a robust framework for enhancing legal reasoning in LLMs through the integration of reinforcement learning and information-gain techniques. By fostering an environment in which models explore and develop reliable reasoning strategies, LegalΔ significantly elevates the capabilities of LLMs in the legal domain. Future research may extend these techniques to other domains that require complex, multi-step reasoning.
