How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs (2406.01168v2)

Published 3 Jun 2024 in econ.GN, cs.AI, cs.CY, cs.ET, cs.HC, and q-fin.EC

Abstract: This study examines the risk preferences of LLMs and how aligning them with human ethical standards affects their economic decision-making. Analyzing 30 LLMs reveals a range of inherent risk profiles, from risk-averse to risk-seeking. We find that aligning LLMs with human values, focusing on harmlessness, helpfulness, and honesty, shifts them toward risk aversion. While moderate alignment improves investment forecast accuracy, excessive alignment leads to overly cautious predictions, potentially resulting in severe underinvestment. Our findings highlight the need for a nuanced approach that balances ethical alignment with the specific requirements of economic domains when using LLMs in finance.

Authors (3)
  1. Shumiao Ouyang (1 paper)
  2. Hayong Yun (2 papers)
  3. Xingjian Zheng (2 papers)
Citations (2)