Towards Detection of Subjective Bias using Contextualized Word Embeddings (2002.06644v1)

Published 16 Feb 2020 in cs.CL

Abstract: Subjective bias detection is critical for applications like propaganda detection, content recommendation, sentiment analysis, and bias neutralization. This bias is introduced into natural language via inflammatory words and phrases, casting doubt over facts, and presupposing the truth. In this work, we perform comprehensive experiments for detecting subjective bias using BERT-based models on the Wiki Neutrality Corpus (WNC). The dataset consists of $360k$ labeled instances derived from Wikipedia edits that remove various instances of bias. We further propose BERT-based ensembles that outperform state-of-the-art methods like $BERT_{large}$ by a margin of $5.6$ in F1 score.
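The paper does not detail its ensembling scheme here, but BERT-based ensembles for a two-class task like this are commonly combined by soft voting over per-model class probabilities. A minimal sketch of that idea, assuming each fine-tuned model emits (P(neutral), P(biased)) for a sentence; all names and numbers below are hypothetical:

```python
# Hypothetical soft-voting ensemble: average class probabilities from
# several classifiers (e.g. fine-tuned BERT variants) and take the argmax.

def ensemble_predict(model_probs):
    """model_probs: list of (p_neutral, p_biased) tuples, one per model.
    Returns the ensemble label and the averaged probabilities."""
    n = len(model_probs)
    avg = [sum(p[i] for p in model_probs) / n for i in range(2)]
    label = "biased" if avg[1] > avg[0] else "neutral"
    return label, avg

# Three hypothetical models scoring one sentence.
probs = [(0.30, 0.70), (0.45, 0.55), (0.20, 0.80)]
label, avg = ensemble_predict(probs)
# label -> "biased"
```

Soft voting tends to smooth out individual models' miscalibrated predictions, which is one plausible source of the F1 gain over a single $BERT_{large}$.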

Authors (3)
  1. Tanvi Dadu (6 papers)
  2. Kartikey Pant (7 papers)
  3. Radhika Mamidi (47 papers)
Citations (21)
