Accountability for Errors When Using LLMs in Peer Review and Evaluation
Establish accountability frameworks that specify who is liable for errors made by large language models when they are used in tasks such as peer review, manuscript assessment, or proposal evaluation.
References
"While we can hold people responsible for misinterpreting a proposal or an article, it is unclear who should be held responsible if the machine makes an error."
                — "What is the Role of Large Language Models in the Evolution of Astronomy Research?" (Fouesneau et al., arXiv:2409.20252, 30 Sep 2024), Section: Ethical and Legal Concerns — Research-specific Concerns