Interpreting the causes of hallucinations in large language models
Develop interpretability methods and causal diagnostics that explain why large language models hallucinate, in particular identifying the query- and model-specific mechanisms that produce hallucinations in retrieval-augmented generation (RAG) systems used for legal research.
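One concrete starting point for such a causal diagnostic is a context-ablation (leave-one-out) test: score the likelihood of a generated answer with and without each retrieved passage, and attribute the answer to the passages whose removal most reduces that likelihood. The sketch below is a minimal, illustrative version of this idea, not a method from the cited paper; the model name ("gpt2"), prompt template, and helper names (`answer_logprob`, `passage_attribution`) are placeholders chosen for the example.

```python
# Minimal sketch of a context-ablation diagnostic for RAG hallucination.
# Idea: if removing every retrieved passage barely changes the likelihood of a
# factual claim in the answer, the claim is likely ungrounded (hallucinated);
# if removal of one passage causes a large drop, the claim leans on that source.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; any causal LM from the Hub works the same way
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def answer_logprob(context: str, question: str, answer: str) -> float:
    """Mean log-probability of the answer tokens given context + question."""
    prompt = f"{context}\n\nQuestion: {question}\nAnswer:"
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    answer_ids = tokenizer(" " + answer, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, answer_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Log-probability of each token conditioned on everything before it.
    logprobs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = input_ids[:, 1:]
    token_lp = logprobs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    n_answer = answer_ids.shape[1]
    return token_lp[0, -n_answer:].mean().item()


def passage_attribution(passages, question, answer):
    """Leave-one-out: drop in answer log-prob when each passage is ablated."""
    full = answer_logprob("\n".join(passages), question, answer)
    drops = []
    for i in range(len(passages)):
        ablated = "\n".join(p for j, p in enumerate(passages) if j != i)
        drops.append(full - answer_logprob(ablated, question, answer))
    return full, drops
```

In this sketch, a per-claim variant (scoring individual sentences of the answer instead of the whole answer) would localize which specific statements are unsupported by retrieval, which is the kind of query-specific mechanism the problem statement asks to identify.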
References
Interpreting why an LLM hallucinates is an open problem.
— Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools
(Magesh et al., 30 May 2024, arXiv:2405.20362), Section 6.4, "A Typology of Legal RAG Errors"