How well do SOTA legal reasoning models support abductive reasoning? (2304.06912v2)
Abstract: We examine how well state-of-the-art (SOTA) models used in legal reasoning support abductive reasoning tasks. Abductive reasoning is a form of logical inference in which a hypothesis is formulated from a set of observations and then used to explain those observations. The ability to formulate such hypotheses is important for lawyers and legal scholars, as it helps them articulate logical arguments, interpret laws, and develop legal theories. Our motivation is the widespread belief that deep learning models, especially LLMs, will soon replace lawyers because they perform well on legal text processing tasks. We argue that doing so requires some capacity for abductive hypothesis formation; as LLMs grow more popular and powerful, we therefore investigate that capacity. To pursue this goal, we build a logic-augmented dataset for abductive reasoning with 498,697 samples and use it to evaluate the performance of a SOTA model in the legal field. Our experimental results show that although these models perform well on some aspects of legal text processing, they still fall short in supporting abductive reasoning tasks.
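The abductive evaluation setting described in the abstract can be illustrated with a minimal sketch. The field names, the toy example, and the baseline below are hypothetical illustrations, not taken from the paper's dataset: each sample pairs observations with candidate hypotheses, and the task is to select the hypothesis that best explains the observations.

```python
from dataclasses import dataclass, field

@dataclass
class AbductiveSample:
    """One abductive reasoning instance: pick the hypothesis that best
    explains the two observations. (Schema is a hypothetical sketch.)"""
    observation_1: str
    observation_2: str
    hypotheses: list = field(default_factory=list)  # candidate explanations
    label: int = 0                                  # index of the best one

def evaluate(model, samples):
    """Accuracy of a model that maps a sample to a hypothesis index."""
    correct = sum(model(s) == s.label for s in samples)
    return correct / len(samples)

# Invented legal-flavoured example, for illustration only.
sample = AbductiveSample(
    observation_1="The contract was signed on March 1.",
    observation_2="The court later held the contract void.",
    hypotheses=[
        "One party lacked legal capacity when signing.",
        "The contract was performed in full.",
    ],
    label=0,
)

# Trivial baseline that always picks the first hypothesis.
baseline = lambda s: 0
print(evaluate(baseline, [sample]))  # 1.0 on this single toy sample
```

Framing the task as hypothesis *selection* over a fixed candidate set, as above, is a common way to make abductive reasoning measurable with standard classification accuracy.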
Authors: Ha-Thanh Nguyen, Randy Goebel, Francesca Toni, Kostas Stathis, Ken Satoh