It Cannot Be Right If It Was Written by AI: On Lawyers' Preferences of Documents Perceived as Authored by an LLM vs a Human (2407.06798v2)

Published 9 Jul 2024 in cs.HC, cs.AI, and cs.CY
Abstract: LLMs enable a future in which certain types of legal documents may be generated automatically. This has a great potential to streamline legal processes, lower the cost of legal services, and dramatically increase access to justice. While many researchers focus on proposing and evaluating LLM-based applications supporting tasks in the legal domain, there is a notable lack of investigations into how legal professionals perceive content if they believe an LLM has generated it. Yet, this is a critical point, as over-reliance or unfounded scepticism may influence whether such documents bring about appropriate legal consequences. This study is a necessary analysis of the ongoing transition towards mature generative AI systems. Specifically, we examined whether the perception of legal documents by lawyers and law students (n=75) varies based on their assumed origin (human-crafted vs AI-generated). The participants evaluated the documents, focusing on their correctness and language quality. Our analysis revealed a clear preference for documents perceived as crafted by a human over those believed to be generated by AI. At the same time, most participants expect a future in which documents will be generated automatically. These findings could be leveraged by legal practitioners, policymakers, and legislators to implement and adopt legal document generation technology responsibly and to fuel the necessary discussions on how legal processes should be updated to reflect recent technological developments.

The paper "It Cannot Be Right If It Was Written by AI: On Lawyers' Preferences of Documents Perceived as Authored by an LLM vs a Human" explores the perception of legal professionals towards documents that they believe are either human-crafted or AI-generated. The context for this paper is the growing potential for LLMs to automate the generation of legal documents. Such automation could streamline legal processes, reduce costs, and enhance access to justice.

The paper specifically addresses a gap in research by focusing on how legal professionals perceive documents when they suspect these have been generated by AI. This perception is critical, since it could influence legal outcomes: professionals may exhibit over-reliance on AI or unfounded skepticism towards it.

Methodology:

  • The study recruited 75 participants (lawyers and law students), who were asked to assess legal documents for correctness and language quality.
  • Documents were presented with indications of their supposed origin, either human or AI.

Findings:

  • There was a notable preference among the legal professionals for documents perceived as created by humans.
  • Despite this preference, the participants also anticipated a future where the generation of legal documents would be automated by AI.

Implications:

  • These insights could inform legal practitioners, policymakers, and legislators on responsibly adopting AI for legal documentation.
  • The findings highlight the need for discussions on updating legal processes in light of emerging AI technologies.

The paper thus contributes to understanding the acceptance and potential integration challenges of AI technologies in the legal domain, suggesting that while there is a preference for human-attributed work, there is also an openness to future AI integration.

Authors (3)
  1. Tereza Novotná (4 papers)
  2. Jaromir Savelka (47 papers)
  3. Jakub Harasta (2 papers)
Citations (1)