Solvability of Hallucinations in Large Language Models
Determine whether the hallucination phenomenon observed in large language models can be resolved, or whether it is an inherent limitation of current architectures and training paradigms.
References
It is unclear if hallucinations are a solvable problem.
                — "What is the Role of Large Language Models in the Evolution of Astronomy Research?" (Fouesneau et al., 2024, arXiv:2409.20252), Section: Limitations and Responsible Use (Discussion)