
Improving negation detection with negation-focused pre-training (2205.04012v1)

Published 9 May 2022 in cs.CL

Abstract: Negation is a common linguistic feature that is crucial in many language understanding tasks, yet it remains a hard problem due to the diversity of its expression across different types of text. Recent work has shown that state-of-the-art NLP models underperform on samples containing negation in various tasks, and that negation detection models do not transfer well across domains. We propose a new negation-focused pre-training strategy, involving targeted data augmentation and negation masking, to better incorporate negation information into language models. Extensive experiments on common benchmarks show that our proposed approach improves negation detection performance and generalizability over the strong baseline NegBERT (Khandelwal and Sawant, 2020).
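The "negation masking" component of the abstract can be pictured as a biased masked-language-modeling step that masks negation cues more aggressively than ordinary tokens, forcing the model to predict them from context. The sketch below is a minimal illustration of that general idea under stated assumptions, not the paper's implementation: the cue list, the masking probabilities, and the function name `negation_focused_masking` are all illustrative choices, and the paper's actual cue inventory would come from annotated negation corpora.

```python
import random

# Small, non-exhaustive set of common English negation cues.
# Purely illustrative; not the cue inventory used in the paper.
NEGATION_CUES = {"not", "no", "never", "none", "nothing",
                 "without", "neither", "nor", "n't"}

def negation_focused_masking(tokens, cue_mask_prob=0.8,
                             base_mask_prob=0.15, mask_token="[MASK]"):
    """Mask negation cues with a higher probability than other tokens.

    Returns (masked_tokens, labels) in the usual MLM format:
    labels hold the original token at masked positions, None elsewhere.
    The probabilities here are hypothetical, not from the paper.
    """
    masked, labels = [], []
    for tok in tokens:
        # Boost the masking probability for tokens that are negation cues.
        p = cue_mask_prob if tok.lower() in NEGATION_CUES else base_mask_prob
        if random.random() < p:
            masked.append(mask_token)
            labels.append(tok)
        else:
            masked.append(tok)
            labels.append(None)
    return masked, labels

if __name__ == "__main__":
    sentence = "The scan showed no evidence of infection".split()
    masked, labels = negation_focused_masking(sentence)
    print(masked)
    print(labels)
```

In a real pre-training pipeline, the `(masked_tokens, labels)` pairs would be converted to token IDs and fed to a masked-language-model objective; the point of the bias is that negation cues, which are rare under uniform masking, become frequent prediction targets.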

Authors (4)
  1. Thinh Hung Truong (9 papers)
  2. Timothy Baldwin (125 papers)
  3. Trevor Cohn (105 papers)
  4. Karin Verspoor (34 papers)
Citations (18)