Enhancing Temporal Understanding in LLMs for Semi-structured Tables (2407.16030v1)

Published 22 Jul 2024 in cs.CL, cs.AI, cs.DB, and cs.LG

Abstract: Temporal reasoning over tabular data presents substantial challenges for LLMs, as evidenced by recent research. In this study, we conduct a comprehensive analysis of temporal datasets to pinpoint the specific limitations of LLMs. Our investigation leads to enhancements in TempTabQA, a dataset specifically designed for tabular temporal question answering. We provide critical insights for improving LLM performance in temporal reasoning tasks with tabular data. Furthermore, we introduce a novel approach, C.L.E.A.R, to strengthen LLM capabilities in this domain. Our findings demonstrate that our method significantly improves evidence-based reasoning across various models. Additionally, our experimental results reveal that indirect supervision with auxiliary data substantially boosts model performance in these tasks. This work contributes to a deeper understanding of LLMs' temporal reasoning abilities over tabular data and promotes advancements in their application across diverse fields.

Authors (4)
  1. Irwin Deng (1 paper)
  2. Kushagra Dixit (3 papers)
  3. Vivek Gupta (74 papers)
  4. Dan Roth (222 papers)
Citations (1)