Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
41 tokens/sec
GPT-4o
59 tokens/sec
Gemini 2.5 Pro Pro
41 tokens/sec
o3 Pro
7 tokens/sec
GPT-4.1 Pro
50 tokens/sec
DeepSeek R1 via Azure Pro
28 tokens/sec
2000 character limit reached

Bias in Decision-Making for AI's Ethical Dilemmas: A Comparative Study of ChatGPT and Claude (2501.10484v1)

Published 17 Jan 2025 in cs.CY and cs.AI

Abstract: Recent advances in LLMs have enabled human-like responses across various tasks, raising questions about their ethical decision-making capabilities and potential biases. This study investigates protected attributes in LLMs through systematic evaluation of their responses to ethical dilemmas. Using two prominent models - GPT-3.5 Turbo and Claude 3.5 Sonnet - we analyzed their decision-making patterns across multiple protected attributes including age, gender, race, appearance, and disability status. Through 11,200 experimental trials involving both single-factor and two-factor protected attribute combinations, we evaluated the models' ethical preferences, sensitivity, stability, and clustering of preferences. Our findings reveal significant protected attributeses in both models, with consistent preferences for certain features (e.g., "good-looking") and systematic neglect of others. Notably, while GPT-3.5 Turbo showed stronger preferences aligned with traditional power structures, Claude 3.5 Sonnet demonstrated more diverse protected attribute choices. We also found that ethical sensitivity significantly decreases in more complex scenarios involving multiple protected attributes. Additionally, linguistic referents heavily influence the models' ethical evaluations, as demonstrated by differing responses to racial descriptors (e.g., "Yellow" versus "Asian"). These findings highlight critical concerns about the potential impact of LLM biases in autonomous decision-making systems and emphasize the need for careful consideration of protected attributes in AI development. Our study contributes to the growing body of research on AI ethics by providing a systematic framework for evaluating protected attributes in LLMs' ethical decision-making capabilities.

Analyzing Bias in Ethical Decision-Making of AI: A Comparative Study of ChatGPT and Claude

The paper "Bias in Decision-Making for AI's Ethical Dilemmas: A Comparative Study of ChatGPT and Claude" addresses a critical concern in artificial intelligence research, focusing on the biases present in the decision-making processes of LLMs. The authors conducted a meticulous comparative paper of GPT-3.5 Turbo and Claude 3.5 Sonnet, scrutinizing their behavior through simulations of ethical dilemmas involving protected attributes such as age, gender, race, appearance, and disability. Through a rigorous analytical framework, the paper offers insights into the preferences, sensitivities, and biases of these LLMs, with significant implications for ethical AI development.

Experimental Methodology

The research employed a systematic evaluation involving 11,200 experimental trials, testing both single and intersectional protected attribute scenarios. By presenting the LLMs with ethical dilemmas in controlled environments, the authors were able to extract data on how these models prioritize between competing ethical choices. Specific focus was given to the normalized frequency of selected attributes, ethical preference priorities, sensitivities, stability measures, and clustering of preferences.

Key findings reveal that both models displayed significant biases toward certain attributes, such as a strong preference for individuals described as “good-looking”. GPT-3.5 Turbo was observed to display stronger alignment with traditionally dominant social structures, favoring attributes typically associated with such groups. In contrast, Claude 3.5 Sonnet exhibited a more nuanced balance, showing a wider spread of preferences across different protected attributes.

Analysis and Results

A thorough analysis showed that in single protected attribute scenarios, both models displayed high sensitivities to race and color attributes, indicating caution in ethical decision-making where these factors are involved. However, in intersectional scenarios, this sensitivity reduced, suggesting the models are more consistent in their outputs when multiple attributes interact.

Stability analysis revealed that intersectional scenarios contribute to more consistent model outputs, possibly offering a more reliable context for deployment in real-world applications. Both LLMs clustered various attributes with notable preferences for certain demographic and appearance-based attributes, emphasizing systematic differences in their ethical evaluation processes.

Implications and Future Work

The paper underscores the ethical considerations crucial for LLM integration into autonomous systems. Training data, often reflective of existing societal biases, appears to significantly influence these models, reiterating the need for transparent and accountable AI development practices. Human oversight remains paramount in mitigating these biases, particularly as LLMs become more embedded in decision-making processes.

Future research directions suggested include broadening the repertoire of protected attributes and cultural contexts. There is an evident need to further explore the impact of prompt engineering on LLM outcomes and to conduct comparative studies assessing human versus AI moral trade-offs. Addressing these areas could deepen our understanding of LLMs in complex ethical scenarios, enhancing their alignment with human ethical standards.

Overall, this paper contributes significantly to AI ethics research by providing a comprehensive framework to evaluate the biases in LLMs' ethical decision-making. It sets the stage for subsequent in-depth examinations and offers a robust foundation for developing fair and equitable AI systems.

User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (3)
  1. Yile Yan (1 paper)
  2. Yuqi Zhu (25 papers)
  3. Wentao Xu (21 papers)