Capturing Row and Column Semantics in Transformer Based Question Answering over Tables (2104.08303v2)

Published 16 Apr 2021 in cs.AI and cs.CL

Abstract: Transformer based architectures are recently used for the task of answering questions over tables. In order to improve the accuracy on this task, specialized pre-training techniques have been developed and applied on millions of open-domain web tables. In this paper, we propose two novel approaches demonstrating that one can achieve superior performance on table QA task without even using any of these specialized pre-training techniques. The first model, called RCI interaction, leverages a transformer based architecture that independently classifies rows and columns to identify relevant cells. While this model yields extremely high accuracy at finding cell values on recent benchmarks, a second model we propose, called RCI representation, provides a significant efficiency advantage for online QA systems over tables by materializing embeddings for existing tables. Experiments on recent benchmarks prove that the proposed methods can effectively locate cell values on tables (up to ~98% Hit@1 accuracy on WikiSQL lookup questions). Also, the interaction model outperforms the state-of-the-art transformer based approaches, pre-trained on very large table corpora (TAPAS and TaBERT), achieving ~3.4% and ~18.86% additional precision improvement on the standard WikiSQL benchmark.

Authors (10)
  1. Michael Glass
  2. Mustafa Canim
  3. Alfio Gliozzo
  4. Saneem Chemmengath
  5. Vishwajeet Kumar
  6. Rishav Chakravarti
  7. Avi Sil
  8. Feifei Pan
  9. Samarth Bharadwaj
  10. Nicolas Rodolfo Fauceglia
Citations (52)

Summary

Capturing Row and Column Semantics in Transformer-Based Question Answering over Tables

The research paper titled "Capturing Row and Column Semantics in Transformer-Based Question Answering over Tables" explores novel methodologies for enhancing the accuracy of question-answering (QA) systems over tabular data using transformer architectures. This paper specifically addresses the challenge of navigating structured table data to derive precise answers to natural language questions without relying on specialized pre-training techniques. Two innovative models are proposed: the Row-Column Intersection (RCI) interaction model and the RCI representation model.

The primary focus of this work is on lookup questions, which return specific strings from a table, as opposed to aggregation questions that require arithmetic over cell values. The paper argues that answers to lookup questions can be directly verified against the table, underscoring their practical importance. Nonetheless, the proposed methods also perform competently on aggregation questions.

Novel Model Approaches

  1. RCI Interaction Model:
    • This model employs a transformer to classify, independently, whether each row and each column contains the answer to a query. Combining a row's probability with a column's probability yields a confidence score for the cell at their intersection.
    • It achieves high precision rates on benchmarks like WikiSQL, surpassing state-of-the-art approaches pre-trained on extensive corpora. Specifically, the interaction model records a precision gain of 3.4% and 18.86% over TAPAS and TaBERT models, respectively.
  2. RCI Representation Model:
    • Aimed at efficiency, the representation model allows for the pre-computation of embeddings for all rows and columns in a table corpus. This pre-processing step is advantageous for online query processing as embeddings can be reused, significantly lowering computational demands for real-time querying.
    • Although slightly less accurate than the interaction model (which attains the headline Hit@1 accuracy of approximately 98% on WikiSQL lookup questions), the representation model remains competitive while avoiding a transformer pass over the table at query time.
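The intersection step of the interaction model can be sketched as follows. This is a minimal illustration under the assumption that row and column probabilities are simply multiplied (an outer product); it is not the paper's implementation:

```python
import numpy as np

def rci_cell_scores(row_probs, col_probs):
    """Combine independently predicted row and column probabilities
    into a per-cell confidence grid via their outer product."""
    return np.outer(np.asarray(row_probs), np.asarray(col_probs))

# Toy 3x2 table: probabilities from a row classifier and a column classifier.
scores = rci_cell_scores([0.1, 0.8, 0.1], [0.3, 0.7])
row, col = np.unravel_index(scores.argmax(), scores.shape)
# the most confident cell sits at the intersection of the best row and column
```

The argmax over the grid then selects the answer cell, and the per-cell scores double as ranking confidences for metrics such as Hit@1.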

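The efficiency argument for the representation model can be made concrete: because table embeddings are materialized offline, online answering reduces to one question embedding plus inexpensive dot products. In this hedged sketch, random vectors stand in for the transformer representations the paper derives:

```python
import numpy as np

# Offline phase: row and column embeddings for a table are computed once
# and cached (random stand-ins here; the paper uses transformer outputs).
rng = np.random.default_rng(0)
row_vecs = rng.normal(size=(100, 64))   # 100 rows, 64-dim embeddings
col_vecs = rng.normal(size=(5, 64))     # 5 columns

def locate_cell(q_vec, row_vecs, col_vecs):
    """Online phase: score the question vector against the cached
    embeddings; no transformer pass over the table is needed."""
    i = int(np.argmax(row_vecs @ q_vec))
    j = int(np.argmax(col_vecs @ q_vec))
    return i, j

# A question embedding that happens to align with row 7 and column 2.
q = row_vecs[7] + col_vecs[2]
```

The online cost is two matrix-vector products per table, independent of transformer depth, which is the source of the representation model's advantage for serving many queries over a fixed corpus.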
Baseline and Evaluation

A robust baseline is established using a machine reading comprehension (MRC) model fine-tuned on datasets such as SQuAD and Natural Questions. This baseline is adapted to recognize relevant cells within tables, serving as a reference point for the evaluation of the proposed models.

The research evaluates the performance of the proposed models on three datasets: WikiSQL, TabMCQ, and WikiTableQuestions, with a focus on tasks involving lookup questions. The models are assessed via metrics like Mean Reciprocal Rank (MRR) and Hit@1, with the RCI models consistently outperforming both the baseline and other state-of-the-art systems across these benchmarks.
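The two evaluation metrics can be stated concretely. Given the 1-based rank at which each question's correct cell appears in a model's scored list, MRR averages the reciprocal ranks and Hit@1 is the fraction of questions whose correct cell is ranked first (a minimal sketch, not the paper's evaluation code):

```python
def mrr_and_hit1(ranks):
    """ranks: 1-based rank of the correct cell for each question."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hit1 = sum(1 for r in ranks if r == 1) / len(ranks)
    return mrr, hit1

mrr, hit1 = mrr_and_hit1([1, 2, 1, 4])
# mrr = (1 + 0.5 + 1 + 0.25) / 4 = 0.6875, hit1 = 0.5
```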

Implications and Future Directions

The RCI models demonstrate a significant advancement in efficiently handling structured table data for QA tasks using transformers without additional pre-training on large-scale tabular data. This work contributes to the broader field of natural language processing by providing an efficient and scalable approach to interacting with tables, which are prevalent in various information systems.

Looking forward, potential directions include extending these models to leverage domain-specific knowledge bases and taxonomies, enhancing their applicability to specialized domains like finance or healthcare. Further research could explore improving row and column contextualization in scenarios where these elements are interdependent, thereby addressing certain limitations identified during the error analysis.

In summary, this paper provides a comprehensive methodology for improving table-based QA accuracy by focusing on row and column semantics, demonstrating that effective solutions can be achieved without the necessity for extensive pre-training on dedicated table datasets.
