
A Survey on the Honesty of Large Language Models (2409.18786v1)

Published 27 Sep 2024 in cs.CL and cs.AI

Abstract: Honesty is a fundamental principle for aligning LLMs with human values, requiring these models to recognize what they know and don't know and to faithfully express their knowledge. Despite their promise, current LLMs still exhibit significant dishonest behaviors, such as confidently presenting wrong answers or failing to express what they know. In addition, research on the honesty of LLMs faces challenges, including varying definitions of honesty, difficulties in distinguishing between known and unknown knowledge, and a lack of comprehensive understanding of related research. To address these issues, we provide a survey on the honesty of LLMs, covering its clarification, evaluation approaches, and strategies for improvement. Moreover, we offer insights for future research, aiming to inspire further exploration in this important area.

Authors (15)
  1. Siheng Li (20 papers)
  2. Cheng Yang (168 papers)
  3. Taiqiang Wu (21 papers)
  4. Chufan Shi (15 papers)
  5. Yuji Zhang (14 papers)
  6. Xinyu Zhu (29 papers)
  7. Zesen Cheng (24 papers)
  8. Deng Cai (181 papers)
  9. Mo Yu (117 papers)
  10. Lemao Liu (62 papers)
  11. Jie Zhou (687 papers)
  12. Yujiu Yang (155 papers)
  13. Ngai Wong (82 papers)
  14. Xixin Wu (85 papers)
  15. Wai Lam (117 papers)
Citations (1)