Graph Reasoning with Context-Aware Linearization for Interpretable Fact Extraction and Verification (2109.12349v1)

Published 25 Sep 2021 in cs.CL

Abstract: This paper presents an end-to-end system for fact extraction and verification using textual and tabular evidence, the performance of which we demonstrate on the FEVEROUS dataset. We experiment both with a multi-task learning paradigm, in which a graph attention network is jointly trained for evidence extraction and veracity prediction, and with a single-objective graph model that learns veracity prediction alone, with evidence extraction performed separately. In both instances, we employ a framework for per-cell linearization of tabular evidence, allowing us to treat evidence from tables as sequences. The templates we employ for linearizing tables capture the context as well as the content of the table data. We furthermore provide a case study to show the interpretability of our approach. Our best-performing system achieves a FEVEROUS score of 0.23 and 53% label accuracy on the blind test data.
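
The sketch below illustrates the general idea of per-cell linearization described in the abstract: each table cell is rendered as a short sentence that combines its content (the cell value) with its context (page, caption, row and column headers), so that tabular evidence can be fed to a standard sequence encoder. The template wording, field names, and example data here are illustrative assumptions, not the exact templates used in the paper.

```python
from dataclasses import dataclass


@dataclass
class TableCell:
    """One cell of a Wikipedia-style table plus the context needed to interpret it."""
    page_title: str   # page the table comes from
    caption: str      # table caption or section heading (context)
    row_header: str   # header of the cell's row
    col_header: str   # header of the cell's column
    value: str        # the cell content itself


def linearize_cell(cell: TableCell) -> str:
    """Turn a single cell into a natural-language sequence that keeps both
    its content and its context. The template is a hypothetical example."""
    return (f"In {cell.page_title} ({cell.caption}), "
            f"the {cell.col_header} of {cell.row_header} is {cell.value}.")


# Example: a single cell becomes a sentence a sequence model can score as evidence.
cell = TableCell(
    page_title="2014 FIFA World Cup",
    caption="Final standings",
    row_header="Germany",
    col_header="Position",
    value="1st",
)
print(linearize_cell(cell))
# -> "In 2014 FIFA World Cup (Final standings), the Position of Germany is 1st."
```

Linearized cells produced this way can be ranked alongside sentence evidence and passed to the graph model, which is what lets the system treat textual and tabular evidence uniformly as sequences.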

Authors (4)
  1. Neema Kotonya (9 papers)
  2. Thomas Spooner (10 papers)
  3. Daniele Magazzeni (42 papers)
  4. Francesca Toni (96 papers)
Citations (12)
