Towards an Enhanced Understanding of Bias in Pre-trained Neural Language Models: A Survey with Special Emphasis on Affective Bias (2204.10365v1)

Published 21 Apr 2022 in cs.CL and cs.AI

Abstract: The remarkable progress in NLP brought about by deep learning, particularly the recent advent of large pre-trained neural language models, has come under scrutiny as several studies have begun to discuss and report potential biases in NLP applications. Bias in NLP is found to originate from latent historical biases encoded by humans into textual data, which are perpetuated or even amplified by NLP algorithms. We present a survey to comprehend bias in large pre-trained language models, analyze the stages at which these biases occur in such models, and the various ways in which they can be quantified and mitigated. Considering the wide applicability of downstream tasks based on textual affective computing in real-world systems such as business, healthcare, and education, we place special emphasis on investigating bias in the context of affect (emotion), i.e., Affective Bias, in large pre-trained language models. We present a summary of various bias evaluation corpora that can aid future research and discuss challenges in research on bias in pre-trained language models. We believe that our attempt to draw a comprehensive view of bias in pre-trained language models, and especially the exploration of affective bias, will be highly beneficial to researchers interested in this evolving field.

Authors (4)
  1. Manjary P. Gangan (6 papers)
  2. Anoop K. (1 paper)
  3. Deepak P. (4 papers)
  4. Lajish V. L (4 papers)
Citations (10)