Paragraph-level Rationale Extraction through Regularization: A case study on European Court of Human Rights Cases (2103.13084v1)

Published 24 Mar 2021 in cs.CL

Abstract: Interpretability or explainability is an emerging research field in NLP. From a user-centric point of view, the goal is to build models that provide proper justification for their decisions, similar to those of humans, by requiring the models to satisfy additional constraints. To this end, we introduce a new application on legal text where, contrary to mainstream literature targeting word-level rationales, we conceive rationales as selected paragraphs in multi-paragraph structured court cases. We also release a new dataset comprising European Court of Human Rights cases, including annotations for paragraph-level rationales. We use this dataset to study the effect of already proposed rationale constraints, i.e., sparsity, continuity, and comprehensiveness, formulated as regularizers. Our findings indicate that some of these constraints are not beneficial in paragraph-level rationale extraction, while others need re-formulation to better handle the multi-label nature of the task we consider. We also introduce a new constraint, singularity, which further improves the quality of rationales, even compared with noisy rationale supervision. Experimental results indicate that the newly introduced task is very challenging and there is a large scope for further research.
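The sparsity and continuity constraints mentioned in the abstract follow the rationale-extraction literature, where they are typically formulated as differentiable regularizers over soft selection scores. Below is a minimal PyTorch-style sketch of how such regularizers are commonly computed over paragraph-level selection scores; the tensor shapes, target value, and loss weights are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def rationale_regularizers(z, sparsity_target=0.3):
    """Illustrative rationale regularizers (not the paper's exact formulation).

    z: tensor of shape (batch, num_paragraphs) holding soft selection
       scores in [0, 1], one score per paragraph of a court case.
    """
    # Sparsity: push the average fraction of selected paragraphs toward a target,
    # so the model justifies its decision with only a few paragraphs.
    sparsity = (z.mean(dim=1) - sparsity_target).abs().mean()

    # Continuity: penalize abrupt changes between adjacent paragraph scores,
    # encouraging contiguous spans of selected paragraphs.
    continuity = (z[:, 1:] - z[:, :-1]).abs().mean()

    return sparsity, continuity

# Hypothetical usage: paragraph_logits come from some document encoder,
# and task_loss is the classification loss on the case outcome.
# z = torch.sigmoid(paragraph_logits)
# sparsity_loss, continuity_loss = rationale_regularizers(z)
# loss = task_loss + 0.01 * sparsity_loss + 0.01 * continuity_loss
```

The paper's findings suggest that not all such constraints transfer directly from word-level to paragraph-level rationales, which is why the weights and formulations above should be treated only as a generic starting point.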

Authors (6)
  1. Ilias Chalkidis (40 papers)
  2. Manos Fergadiotis (12 papers)
  3. Dimitrios Tsarapatsanis (2 papers)
  4. Nikolaos Aletras (72 papers)
  5. Ion Androutsopoulos (51 papers)
  6. Prodromos Malakasiotis (22 papers)
Citations (102)
