Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
41 tokens/sec
GPT-4o
59 tokens/sec
Gemini 2.5 Pro Pro
41 tokens/sec
o3 Pro
7 tokens/sec
GPT-4.1 Pro
50 tokens/sec
DeepSeek R1 via Azure Pro
28 tokens/sec
2000 character limit reached

How do Large Language Models Navigate Conflicts between Honesty and Helpfulness? (2402.07282v2)

Published 11 Feb 2024 in cs.CL, cs.AI, and cs.LG

Abstract: In day-to-day communication, people often approximate the truth - for example, rounding the time or omitting details - in order to be maximally helpful to the listener. How do LLMs handle such nuanced trade-offs? To address this question, we use psychological models and experiments designed to characterize human behavior to analyze LLMs. We test a range of LLMs and explore how optimization for human preferences or inference-time reasoning affects these trade-offs. We find that reinforcement learning from human feedback improves both honesty and helpfulness, while chain-of-thought prompting skews LLMs towards helpfulness over honesty. Finally, GPT-4 Turbo demonstrates human-like response patterns including sensitivity to the conversational framing and listener's decision context. Our findings reveal the conversational values internalized by LLMs and suggest that even these abstract values can, to a degree, be steered by zero-shot prompting.

User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (4)
  1. Ryan Liu (21 papers)
  2. Theodore R. Sumers (16 papers)
  3. Ishita Dasgupta (35 papers)
  4. Thomas L. Griffiths (150 papers)
Citations (7)
X Twitter Logo Streamline Icon: https://streamlinehq.com