Logic Against Bias: Textual Entailment Mitigates Stereotypical Sentence Reasoning (2303.05670v1)

Published 10 Mar 2023 in cs.CL, cs.AI, and cs.CY

Abstract: Due to their similarity-based learning objectives, pretrained sentence encoders often internalize stereotypical assumptions that reflect the social biases present in their training corpora. In this paper, we describe several kinds of stereotypes concerning different communities that are present in popular sentence representation models, including pretrained next-sentence-prediction and contrastive sentence representation models. We compare such models to textual entailment models that learn language logic for a variety of downstream language understanding tasks. By comparing strong pretrained models based on text similarity with textual entailment learning, we conclude that explicit logic learning with textual entailment can significantly reduce bias and improve the recognition of social communities, without an explicit de-biasing process.

Authors (2)
  1. Hongyin Luo (31 papers)
  2. James Glass (173 papers)
Citations (6)
