AI's assigned gender affects human-AI cooperation (2412.05214v1)

Published 6 Dec 2024 in cs.CY, cs.AI, cs.GT, and cs.HC

Abstract: Cooperation between humans and machines is increasingly vital as AI becomes more integrated into daily life. Research indicates that people are often less willing to cooperate with AI agents than with humans, more readily exploiting AI for personal gain. While prior studies have shown that giving AI agents human-like features influences people's cooperation with them, the impact of AI's assigned gender remains underexplored. This study investigates how human cooperation varies based on gender labels assigned to AI agents with which they interact. In the Prisoner's Dilemma game, 402 participants interacted with partners labelled as AI (bot) or humans. The partners were also labelled male, female, non-binary, or gender-neutral. Results revealed that participants tended to exploit female-labelled and distrust male-labelled AI agents more than their human counterparts, reflecting gender biases similar to those in human-human interactions. These findings highlight the significance of gender biases in human-AI interactions that must be considered in future policy, design of interactive AI systems, and regulation of their use.

Summary

  • The paper finds that gender labels significantly affect cooperation rates, with female-labelled partners receiving 58.6% cooperation compared to 39.7% for male-labelled ones.
  • It employs an online Prisoner’s Dilemma experiment with 402 participants to compare human-AI interactions under various gender assignments.
  • Insights suggest that AI design must mitigate inherent gender biases to foster equitable and trustworthy human-AI cooperation.

The Influence of AI's Assigned Gender on Human-AI Cooperation

The paper examines the impact of gender labels assigned to AI agents on human cooperation in mixed-motive scenarios such as the Prisoner's Dilemma game. As artificial intelligence becomes more pervasive in society, understanding how human cooperation with AI differs from cooperation with humans is crucial. The work builds on existing research suggesting that while people generally cooperate less with AI agents than with humans, this dynamic may be influenced by the anthropomorphic characteristics assigned to AI, such as gender.
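
For readers less familiar with the game, the sketch below shows the standard Prisoner's Dilemma payoff structure that makes it mixed-motive. This is a minimal illustration: the numeric payoffs are conventional textbook values, not the stakes used in the paper's experiment.

```python
# Standard one-shot Prisoner's Dilemma payoffs. The values are illustrative
# textbook numbers, not the stakes used in the paper. With T > R > P > S
# (here 5 > 3 > 1 > 0), defecting is individually tempting even though
# mutual cooperation beats mutual defection, which is what makes the game
# "mixed-motive".
PAYOFFS = {
    # (my_move, partner_move): (my_payoff, partner_payoff)
    ("C", "C"): (3, 3),  # R: reward for mutual cooperation
    ("C", "D"): (0, 5),  # S, T: the cooperator is exploited
    ("D", "C"): (5, 0),  # T, S: the defector exploits the cooperator
    ("D", "D"): (1, 1),  # P: punishment for mutual defection
}

def outcome(my_move, partner_move):
    """Return (my_payoff, partner_payoff) for one round."""
    return PAYOFFS[(my_move, partner_move)]

print(outcome("D", "C"))  # (5, 0): the 'exploitation' outcome studied here
```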

The researchers conducted an online experiment in which 402 participants interacted with partners labelled as either humans or AI agents in a series of Prisoner's Dilemma games. The partners were further labelled male, female, non-binary, or gender-neutral. The study asked two questions: whether human cooperation rates with AI would align with cooperation rates with humans once the AI was assigned a specific gender, and whether the gender biases observed in human-human interactions extend to human-AI interactions.
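
Crossing the two partner-type labels with the four gender labels yields eight experimental cells, as the short sketch below reconstructs; the condition names are ours, and the paper's exact labels may differ.

```python
from itertools import product

# Our reconstruction of the 2 x 4 between-subjects grid: partner type
# crossed with the assigned gender label. The exact condition names used
# in the paper may differ.
partner_types = ["human", "bot"]
gender_labels = ["male", "female", "non-binary", "gender-neutral"]

for i, (ptype, glabel) in enumerate(product(partner_types, gender_labels), 1):
    print(f"condition {i}: {glabel}-labelled {ptype}")  # 8 cells in total
```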

Key Findings and Results

  1. Cooperation Rates with AI vs. Humans: Participants were slightly more cooperative with humans than with AI agents, though the difference was not statistically significant. This finding contrasts with previous studies showing a more pronounced reluctance to cooperate with AI compared to humans. However, the motives for cooperation and defection varied between AI and human partners, indicating an underlying bias.
  2. Gender-Based Cooperation Differences: Participants cooperated more with female-labelled partners than with male or other gender-labelled partners, regardless of whether the partner was human or AI. The cooperation rate was highest with female-labelled partners (58.6%) and lowest with male-labelled partners (39.7%); a sketch of how such a gap in proportions can be tested follows this list. Higher cooperation with female-labelled partners was attributed to participants' optimism about achieving mutually beneficial outcomes, whereas lower cooperation with male-labelled partners was associated with a lack of trust.
  3. Exploitation vs. Trust Motives: When defecting against human partners, participants were more likely to do so due to distrust rather than exploitation. In contrast, in interactions with AI agents, exploitation motives were more prevalent. Notably, the exploitation motive was particularly strong when participants interacted with female AI partners.
  4. Participant Gender Influence: Female participants cooperated at higher rates than male participants in both human-human and human-AI interactions, consistent with past research on gender differences in cooperative behaviour.
  5. Cultural Consistency: The study was conducted in the UK, and while the results align with existing literature on gender biases in human cooperation, cross-cultural variation might influence these dynamics. Future studies should explore such variation to generalize the findings across cultural contexts.
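
As referenced in finding 2, the sketch below shows one standard way to check whether a gap in cooperation rates is statistically reliable, using a chi-square test on a 2x2 contingency table. The per-condition counts are hypothetical placeholders, since the summary reports only the rates and the overall sample of 402; only the 58.6% and 39.7% figures come from the paper.

```python
from scipy.stats import chi2_contingency

# Hypothetical per-label counts: the summary reports only the rates
# (58.6% vs 39.7%) and the overall N of 402, so the cell sizes below are
# placeholders to be replaced with the paper's actual counts.
n_female, n_male = 100, 100
coop_female = round(0.586 * n_female)   # ~59 cooperating participants
coop_male = round(0.397 * n_male)       # ~40 cooperating participants

table = [
    [coop_female, n_female - coop_female],  # female-labelled: coop, defect
    [coop_male, n_male - coop_male],        # male-labelled:   coop, defect
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # small p: gap unlikely by chance
```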

Implications and Future Research

The findings underscore the complexity of human-AI interaction dynamics and the role of gender biases in shaping these interactions. Assigning a gender to AI agents may enhance cooperation rates, but it simultaneously introduces human-like biases that could perpetuate undesirable stereotypes and exploitative behaviors.

From a practical standpoint, designing AI systems that incorporate gender characteristics necessitates careful consideration of potential biases. Developers and policy-makers should be aware of these biases and work towards mitigating them through thoughtful design and regulation.

Theoretically, understanding how anthropomorphic features like gender influence human-AI interaction provides valuable insight into social dynamics and can inform the design of more effective AI systems. Further research is warranted on repeated interactions and their long-term effects on human-AI cooperation, as well as on the impact of other identity features such as ethnicity and age. Additionally, investigating strategies to counteract negative biases without losing the benefits of anthropomorphism could be an important line of inquiry in developing equitable and trustworthy AI technologies.
