Enhancing Tabular Reasoning with Pattern Exploiting Training

Published 21 Oct 2022 in cs.CL (arXiv:2210.12259v1)

Abstract: Recent methods based on pre-trained language models have exhibited superior performance on tabular tasks (e.g., tabular NLI), yet they show inherent problems such as not relying on the right evidence and making inconsistent predictions across related inputs when reasoning over tabular data. In this work, we apply Pattern-Exploiting Training (PET), i.e., strategic masked language modeling (MLM), to pre-trained language models to strengthen their pre-existing knowledge and tabular reasoning abilities. The resulting model exhibits a better understanding of knowledge facts and tabular reasoning than current baselines. Additionally, we demonstrate that such models are more effective on the underlying downstream task of tabular inference on InfoTabs. Furthermore, we show our model's robustness against adversarial sets generated through various character- and word-level perturbations.
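To make the PET idea concrete: PET reformulates a classification task as a cloze-style masked-LM problem via a pattern (a template with a mask slot) and a verbalizer (a mapping from labels to tokens the MLM predicts at that slot). The sketch below shows one hypothetical way to build such cloze inputs for tabular NLI over an infobox-style table; the specific pattern wording, verbalizer tokens, and linearization are illustrative assumptions, not the paper's exact templates.

```python
# Hypothetical PET-style cloze construction for tabular NLI.
# The pattern text, verbalizer tokens, and table linearization below are
# assumptions for illustration, not the templates used in the paper.

MASK = "[MASK]"

# Verbalizer: maps each NLI label to a single token the masked LM
# should predict at the [MASK] position.
VERBALIZER = {"entailment": "Yes", "contradiction": "No", "neutral": "Maybe"}

def linearize_table(table: dict, title: str) -> str:
    """Flatten a key-value infobox table into a sentence-like premise."""
    return ". ".join(f"The {k} of {title} is {v}" for k, v in table.items()) + "."

def build_cloze(table: dict, title: str, hypothesis: str) -> str:
    """Pattern: premise + hypothesis with a [MASK] slot for the label token."""
    premise = linearize_table(table, title)
    return f"{premise} Question: {hypothesis}? Answer: {MASK}."

def target_text(table: dict, title: str, hypothesis: str, label: str) -> str:
    """Training target: the same sequence with the mask filled by the verbalized label."""
    return build_cloze(table, title, hypothesis).replace(MASK, VERBALIZER[label])

if __name__ == "__main__":
    table = {"Born": "1962", "Occupation": "Actor"}
    print(build_cloze(table, "Tom Cruise", "Tom Cruise is an actor"))
    print(target_text(table, "Tom Cruise", "Tom Cruise is an actor", "entailment"))
```

During training, an MLM would be fine-tuned to predict the verbalized label token at the mask position; at inference, the label whose verbalizer token receives the highest probability is chosen.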

Citations (5)
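The robustness evaluation mentioned in the abstract relies on adversarial sets built from character- and word-level perturbations. A minimal sketch of two such perturbations is below; the specific operations (adjacent-character swap, random word drop) and probabilities are assumptions for illustration, not the paper's exact perturbation suite.

```python
# Hypothetical character- and word-level perturbations of the kind used
# to build adversarial evaluation sets. Operations and rates are assumed
# for illustration; seeded RNGs keep the output reproducible.
import random

def char_swap(word: str, rng: random.Random) -> str:
    """Character-level noise: swap two adjacent characters inside a word."""
    if len(word) < 4:
        return word
    i = rng.randrange(1, len(word) - 2)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def perturb_sentence(sentence: str, rng=None, p: float = 0.3) -> str:
    """Apply a character swap to each word independently with probability p."""
    rng = rng or random.Random(0)
    return " ".join(
        char_swap(w, rng) if rng.random() < p else w
        for w in sentence.split()
    )

def word_drop(sentence: str, rng=None) -> str:
    """Word-level noise: drop one random non-initial word from the sentence."""
    rng = rng or random.Random(0)
    words = sentence.split()
    if len(words) < 3:
        return sentence
    del words[rng.randrange(1, len(words))]
    return " ".join(words)
```

A robust model should keep its prediction stable when the hypothesis is passed through such perturbations, which is what the adversarial evaluation measures.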
