The Confidence-Competence Gap in Large Language Models: A Cognitive Study (2309.16145v1)
Abstract: Large language models (LLMs) have attracted widespread attention for their performance across diverse domains. This study investigates LLMs' cognitive abilities and confidence dynamics, focusing on how well their self-assessed confidence aligns with their actual performance. We probe the models with diverse questionnaires and real-world scenarios and elicit how they express confidence in their responses. Our findings reveal intriguing instances where models report high confidence even when they answer incorrectly, reminiscent of the Dunning-Kruger effect observed in human psychology. Conversely, there are cases where models express low confidence despite answering correctly, revealing potential underestimation biases. These results underscore the need for a deeper understanding of these models' cognitive processes. By examining the nuances of LLMs' self-assessment mechanisms, this investigation offers insights that can help advance the functionality and broaden the potential applications of these powerful models.
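The core quantity in this comparison, the gap between a model's self-reported confidence and its actual accuracy, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's evaluation pipeline: the `(confidence, is_correct)` record format, the 0-100 confidence scale, and the `confidence_competence_gap` helper are assumptions introduced here for exposition.

```python
from statistics import mean

def confidence_competence_gap(records):
    """Compare self-reported confidence with correctness.

    `records` is a list of (confidence, is_correct) pairs, e.g. gathered
    by asking a model a question and then asking it to rate its own
    confidence on a 0-100 scale (an assumed elicitation format).
    """
    conf_when_wrong = [c for c, ok in records if not ok]
    conf_when_right = [c for c, ok in records if ok]
    accuracy = mean(ok for _, ok in records)        # fraction of correct answers
    mean_conf = mean(c for c, _ in records) / 100   # rescale confidence to [0, 1]
    return {
        "accuracy": accuracy,
        "mean_confidence": mean_conf,
        # > 0: confidence exceeds competence (Dunning-Kruger-like pattern)
        # < 0: model underestimates itself
        "overconfidence": mean_conf - accuracy,
        "mean_conf_when_wrong": mean(conf_when_wrong) / 100 if conf_when_wrong else None,
        "mean_conf_when_right": mean(conf_when_right) / 100 if conf_when_right else None,
    }

# Toy example: a model that is often wrong yet reports high confidence.
records = [(95, False), (90, True), (85, False), (60, True), (99, False)]
print(confidence_competence_gap(records))
```

A positive `overconfidence` value mirrors the Dunning-Kruger-like pattern described in the abstract, while a negative value corresponds to the underestimation bias.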
Authors: Aniket Kumar Singh, Suman Devkota, Bishal Lamichhane, Uttam Dhakal, Chandra Dhakal