On the Influence of Gender and Race in Romantic Relationship Prediction from Large Language Models (2410.03996v1)

Published 5 Oct 2024 in cs.CL

Abstract: We study the presence of heteronormative biases and prejudice against interracial romantic relationships in LLMs by performing controlled name-replacement experiments for the task of relationship prediction. We show that models are less likely to predict romantic relationships for (a) same-gender character pairs than different-gender pairs; and (b) intra/inter-racial character pairs involving Asian names as compared to Black, Hispanic, or White names. We examine the contextualized embeddings of first names and find that gender for Asian names is less discernible than non-Asian names. We discuss the social implications of our findings, underlining the need to prioritize the development of inclusive and equitable technology.

Summary

  • The paper demonstrates that LLMs show significant heteronormative and racial biases in predicting romantic relationships.
  • Controlled name-replacement experiments reveal lower recall of romantic predictions for same-gender pairs and for pairs involving Asian names.
  • Findings emphasize the need to develop inclusive AI models that mitigate these biases and promote equitable relationship representations.

Analyzing Biases in Romantic Relationship Prediction from LLMs

The paper, "On the Influence of Gender and Race in Romantic Relationship Prediction from LLMs," explores the biases inherent in LLMs concerning romantic relationship predictions. Specifically, the paper scrutinizes heteronormative biases and prejudices against interracial relationships.

Research Context and Motivation

In the field of natural language understanding, identifying romantic relationships from dialogue is an intricate task, often influenced by the gender, race, or ethnicity inferred from names. The primary hypothesis of the paper is that LLMs, much like societal norms, may exhibit biases that favor heteronormative relationships and discriminate against interracial ones. Such biases can perpetuate stereotypes and marginalize groups that do not conform to conventional societal norms.

Experimental Setup

The research employs controlled name-replacement experiments to assess biases in LLM predictions. The dataset comprises dialogues from movie scripts, with character names systematically replaced to observe changes in relationship predictions. The paper leverages Llama2 and Mistral models to evaluate the predictive accuracy concerning gender and race pairings.

Key aspects of the setup include:

  • Task Definition: Prediction of relationship types based on character dialogues.
  • Models and Dataset: Use of Llama2 and Mistral models, and the DDRel dataset with pre-defined relationship types.
  • Comparison Metrics: Emphasis on recall for romantic predictions within same-gender and interracial pairings (see the sketch below).
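
To make the setup concrete, the following is a minimal sketch of how such a controlled name-replacement probe could be run with the Hugging Face transformers pipeline. The prompt wording, name pools, and pair groupings are illustrative assumptions, not the paper's exact protocol or its DDRel preprocessing.

```python
# Hypothetical sketch of a controlled name-replacement probe for relationship
# prediction; names, prompt, and groupings are assumptions for illustration.
from itertools import product

from transformers import pipeline  # Hugging Face transformers

# Hypothetical name pools keyed by perceived gender and race.
NAME_POOLS = {
    ("female", "Asian"): ["Mei", "Priya"],
    ("male", "Asian"): ["Hiroshi", "Raj"],
    ("female", "White"): ["Emily", "Claire"],
    ("male", "White"): ["Jake", "Connor"],
}

PROMPT = (
    "Dialogue between {a} and {b}:\n{dialogue}\n\n"
    "What is the relationship between {a} and {b}? "
    "Answer with one short label such as 'romantic partners', 'siblings', or 'colleagues'.\n"
    "Relationship:"
)

def predicts_romantic(generator, dialogue_template, name_a, name_b):
    """Substitute a name pair into the dialogue and check for a romantic prediction."""
    dialogue = dialogue_template.format(a=name_a, b=name_b)
    text = PROMPT.format(a=name_a, b=name_b, dialogue=dialogue)
    out = generator(text, max_new_tokens=8, do_sample=False)[0]["generated_text"]
    return "romantic" in out[len(text):].lower()

def romantic_recall(generator, romantic_dialogues, pair_groups):
    """Recall of the romantic label on gold-romantic dialogues, per demographic pairing.

    `pair_groups` maps a group label (e.g. 'same-gender') to a list of
    (name_a, name_b) tuples substituted into every dialogue template.
    """
    recall = {}
    for group, name_pairs in pair_groups.items():
        hits = total = 0
        for dialogue_template, (name_a, name_b) in product(romantic_dialogues, name_pairs):
            hits += predicts_romantic(generator, dialogue_template, name_a, name_b)
            total += 1
        recall[group] = hits / total
    return recall

# Example usage with a placeholder checkpoint and two contrast groups:
# generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")
# pair_groups = {
#     "different-gender": [("Emily", "Jake"), ("Mei", "Hiroshi")],
#     "same-gender": [("Emily", "Claire"), ("Mei", "Priya")],
# }
# print(romantic_recall(generator, romantic_dialogues, pair_groups))
```

Comparing the recall values across groups is what surfaces the gaps reported in the findings below.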

Findings

The paper's findings point to pronounced biases in LLMs:

  • Same-Gender Bias: Models are less likely to predict romantic relationships in same-gender pairings compared to different-gender ones. This indicates a heteronormative bias, where relationships deviating from conventional norms are under-recognized.
  • Racial Bias: Predictions involving Asian names show a lower recall for romantic relationships, suggesting that the model struggles to discern gender and thus underestimates romantic links for these names. This may amplify existing societal biases against Asians.
  • Interracial Relationships: While there is evidence of racial bias, strong prejudice against interracial pairings was not observed for non-Asian names.

The analysis reveals that gender information inferred from names significantly affects relationship predictions, whereas racial information has a lesser impact.
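
The embedding analysis described above can be approximated with a linear probe over contextualized name representations: if gender is harder to recover from the embeddings of Asian names, a simple classifier trained on them should score lower. The template sentence, model checkpoint, and probe below are assumptions for illustration, not the paper's exact procedure.

```python
# Hypothetical linear probe for gender separability in contextualized name embeddings.
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "mistralai/Mistral-7B-v0.1"  # placeholder; any causal LM with hidden states
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def name_embedding(name: str, template: str = "{name} said hello.") -> torch.Tensor:
    """Mean-pool the last hidden states over the name's subword tokens."""
    text = template.format(name=name)
    enc = tokenizer(text, return_tensors="pt")
    name_ids = tokenizer(name, add_special_tokens=False)["input_ids"]
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
    # Simplification: assumes the name's first subword also appears at sentence start.
    ids = enc["input_ids"][0].tolist()
    start = next(i for i in range(len(ids)) if ids[i] == name_ids[0])
    return hidden[start : start + len(name_ids)].mean(dim=0)

def gender_separability(names: list[str], genders: list[int]) -> float:
    """Cross-validated accuracy of a linear probe predicting gender from name embeddings."""
    X = torch.stack([name_embedding(n) for n in names]).numpy()
    return cross_val_score(LogisticRegression(max_iter=1000), X, genders, cv=5).mean()

# Example usage with hypothetical per-group name lists:
# asian_acc = gender_separability(asian_names, asian_genders)
# white_acc = gender_separability(white_names, white_genders)
# A lower asian_acc would mirror the finding that gender is less discernible
# from the contextualized embeddings of Asian names.
```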

Implications and Future Directions

The implications of these biases are significant, potentially leading to both representational and allocational harms in applications relying on LLMs, such as story generation or personalized advertising. Failure to accurately predict romantic relationships, especially in same-gender contexts, risks marginalizing LGBTQIA+ communities further.

Future research should prioritize the development of more inclusive models that mitigate these biases. Expanding datasets to include diverse narratives and employing in-context learning could enhance model robustness. Moreover, continuous assessment and refinement of prompt designs could improve prediction accuracy and fairness across diverse demographic intersections.
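
As a rough illustration of the in-context learning suggestion, a prompt could prepend counter-stereotypical exemplars, such as a same-gender romantic pair, before the target dialogue. The exemplars and wording below are hypothetical and not drawn from the paper.

```python
# Hypothetical few-shot prompt with a counter-stereotypical (same-gender romantic) exemplar.
FEW_SHOT_PREFIX = """\
Dialogue between Maria and Elena:
Maria: I booked the restaurant for our anniversary on Friday.
Elena: Ten years already. I love you more every one of them.
Relationship: romantic partners

Dialogue between Daniel and Mark:
Daniel: Did you finish the quarterly report?
Mark: Sending it to you now.
Relationship: colleagues

"""

def build_prompt(dialogue: str, name_a: str, name_b: str) -> str:
    """Few-shot prompt: exemplars first, then the target dialogue to label."""
    return (
        FEW_SHOT_PREFIX
        + f"Dialogue between {name_a} and {name_b}:\n{dialogue}\n"
        + "Relationship:"
    )
```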

Conclusion

This research illuminates critical biases in LLM relationship prediction, highlighting how gender and race cues inferred from names can perpetuate societal norms. Addressing these biases is a necessary step toward equitable AI systems that accurately represent diverse societal dynamics and respect minority communities.
