
Enhancing Tabular Reasoning with Pattern Exploiting Training (2210.12259v1)

Published 21 Oct 2022 in cs.CL

Abstract: Recent methods based on pre-trained language models have exhibited superior performance on tabular tasks (e.g., tabular NLI), despite inherent problems such as not using the right evidence and making inconsistent predictions across inputs when reasoning over tabular data. In this work, we utilize Pattern-Exploiting Training (PET), i.e., strategic masked language modeling (MLM), on pre-trained language models to strengthen these tabular reasoning models' pre-existing knowledge and reasoning abilities. Our upgraded model exhibits a superior understanding of knowledge facts and tabular reasoning compared to current baselines. Additionally, we demonstrate that such models are more effective on the downstream task of tabular inference on InfoTabs. Furthermore, we show our model's robustness against adversarial sets generated through various character- and word-level perturbations.
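As a concrete illustration of the PET idea described in the abstract, the sketch below scores NLI labels for a linearized table with an off-the-shelf masked LM via a cloze pattern. This is a minimal zero-shot sketch, not the paper's method: the checkpoint (`roberta-base`), the table linearization scheme, the cloze pattern, and the Yes/No/Maybe verbalizer are all illustrative assumptions.

```python
# Minimal PET-style (cloze / strategic-MLM) scoring sketch for tabular NLI.
# Assumptions (not from the paper): roberta-base checkpoint, a simple
# "key is value" table linearization, and a Yes/No/Maybe verbalizer.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "roberta-base"  # assumption: any masked-LM checkpoint would do
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL).eval()

# Hypothetical verbalizer: map each NLI label to one token filling the mask.
VERBALIZER = {"entailment": " Yes", "contradiction": " No", "neutral": " Maybe"}

def linearize_table(rows):
    """Flatten key-value table rows into a textual premise (one simple scheme)."""
    return ". ".join(f"{k} is {v}" for k, v in rows) + "."

def pet_scores(table_rows, hypothesis):
    """Score each label by the masked-LM logit of its verbalizer token."""
    premise = linearize_table(table_rows)
    # Cloze pattern: the masked position is verbalized as the label word.
    text = f"{premise} Question: {hypothesis}? Answer: {tokenizer.mask_token}."
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Locate the <mask> position in the input sequence.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    scores = {}
    for label, word in VERBALIZER.items():
        tok_id = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word))[0]
        scores[label] = logits[0, mask_pos, tok_id].item()
    return scores

if __name__ == "__main__":
    rows = [("Name", "Marie Curie"), ("Field", "Physics"), ("Nobel Prizes", "2")]
    print(pet_scores(rows, "Marie Curie won more than one Nobel Prize"))
```

In actual PET, the masked-LM head is also fine-tuned on such pattern-verbalizer pairs rather than used purely zero-shot; the sketch only shows how label scores are read off the mask position.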

Authors (3)
  1. Abhilash Reddy Shankarampeta (1 paper)
  2. Vivek Gupta (75 papers)
  3. Shuo Zhang (256 papers)
Citations (5)