
Logic Rules as Explanations for Legal Case Retrieval (2403.01457v1)

Published 3 Mar 2024 in cs.IR and cs.CL

Abstract: In this paper, we address the issue of using logic rules to explain the results from legal case retrieval. The task is critical to legal case retrieval because the users (e.g., lawyers or judges) are highly specialized and require the system to provide logical, faithful, and interpretable explanations before making legal decisions. Recently, research efforts have been made to learn explainable legal case retrieval models. However, these methods usually select rationales (key sentences) from the legal cases as explanations, failing to provide faithful and logically correct explanations. In this paper, we propose Neural-Symbolic enhanced Legal Case Retrieval (NS-LCR), a framework that explicitly conducts reasoning on the matching of legal cases through learning case-level and law-level logic rules. The learned rules are then integrated into the retrieval process in a neuro-symbolic manner. Benefiting from the logic and interpretable nature of the logic rules, NS-LCR is equipped with built-in faithful explainability. We also show that NS-LCR is a model-agnostic framework that can be plugged in for multiple legal retrieval models. To showcase NS-LCR's superiority, we enhance existing benchmarks by adding manually annotated logic rules and introducing a novel explainability metric using LLMs. Our comprehensive experiments reveal NS-LCR's effectiveness for ranking, alongside its proficiency in delivering reliable explanations for legal case retrieval.
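The abstract describes fusing a neural matching score with fuzzy-logic rule scores in a model-agnostic way. As a minimal illustrative sketch only: the function and parameter names below are hypothetical, not from the paper, and the convex-mixture fusion with a product t-norm is one common neuro-symbolic choice, not necessarily the exact formulation NS-LCR uses.

```python
def product_tnorm(a: float, b: float) -> float:
    """Product t-norm: fuzzy conjunction of two truth values in [0, 1]."""
    return a * b

def rule_satisfaction(premise_scores: list[float]) -> float:
    """Fuzzy truth value of a conjunctive logic rule, i.e. the t-norm
    of all premise truth values (hypothetical rule representation)."""
    truth = 1.0
    for score in premise_scores:
        truth = product_tnorm(truth, score)
    return truth

def fused_score(neural_score: float,
                case_rule_scores: list[float],
                law_rule_scores: list[float],
                alpha: float = 0.5) -> float:
    """Model-agnostic fusion sketch: mix any retriever's relevance score
    with case-level and law-level rule satisfaction via a convex combination.
    `alpha` (assumed) trades off neural vs. symbolic evidence."""
    symbolic = 0.5 * (rule_satisfaction(case_rule_scores)
                      + rule_satisfaction(law_rule_scores))
    return alpha * neural_score + (1.0 - alpha) * symbolic

# Example: a retriever score of 0.8 with fully satisfied rules is pulled upward.
score = fused_score(0.8, [1.0, 1.0], [1.0], alpha=0.5)
```

Because the symbolic term is an explicit function of rule truth values, the same fusion can wrap any base retriever, which is the sense in which such a framework is model-agnostic.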
