A Surprisingly Robust Trick for Winograd Schema Challenge (1905.06290v2)

Published 15 May 2019 in cs.CL

Abstract: The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning. In this paper, we show that the performance of three language models on WSC273 strongly improves when fine-tuned on a similar pronoun disambiguation problem dataset (denoted WSCR). We additionally generate a large unsupervised WSC-like dataset. By fine-tuning the BERT language model both on the introduced and on the WSCR dataset, we achieve overall accuracies of 72.5% and 74.7% on WSC273 and WNLI, improving the previous state-of-the-art solutions by 8.8% and 9.6%, respectively. Furthermore, our fine-tuned models are also consistently more robust on the "complex" subsets of WSC273, introduced by Trichelair et al. (2018).
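The core scoring idea behind this line of work is to treat pronoun resolution as masked-token prediction: replace the ambiguous pronoun with mask tokens and compare how probable each candidate referent is under BERT's masked language model. Below is a minimal sketch of that scoring step (the paper's fine-tuning on WSCR and the generated WSC-like dataset is not shown). It assumes the Hugging Face `transformers` library and the public `bert-large-uncased` checkpoint; the example sentence and function names are illustrative, not from the paper.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForMaskedLM.from_pretrained("bert-large-uncased")
model.eval()

def candidate_score(sentence_with_blank: str, candidate: str) -> float:
    """Average log-probability of the candidate's WordPiece tokens
    when they fill the pronoun slot, with all slot positions masked."""
    cand_ids = tokenizer.encode(candidate, add_special_tokens=False)
    # Replace the blank with one [MASK] per candidate sub-token.
    masked = sentence_with_blank.replace(
        "_", " ".join(["[MASK]"] * len(cand_ids))
    )
    inputs = tokenizer(masked, return_tensors="pt")
    mask_positions = (
        inputs["input_ids"][0] == tokenizer.mask_token_id
    ).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    log_probs = torch.log_softmax(logits, dim=-1)
    # Sum the log-probability of each candidate token at its mask position.
    total = sum(
        log_probs[pos, tok].item()
        for pos, tok in zip(mask_positions, cand_ids)
    )
    return total / len(cand_ids)

sentence = "The trophy doesn't fit in the suitcase because _ is too big."
for cand in ["the trophy", "the suitcase"]:
    print(cand, candidate_score(sentence, cand))
```

The candidate with the higher score is taken as the predicted referent. Averaging over sub-tokens keeps multi-word candidates comparable to single-word ones; masking all slot positions at once is an approximation, since each token's probability is computed without seeing the others.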

Authors (5)
  1. Vid Kocijan (11 papers)
  2. Ana-Maria Cretu (13 papers)
  3. Oana-Maria Camburu (29 papers)
  4. Yordan Yordanov (8 papers)
  5. Thomas Lukasiewicz (125 papers)
Citations (101)
