Unlocking Temporal Question Answering for Large Language Models with Tailor-Made Reasoning Logic (2305.15014v2)

Published 24 May 2023 in cs.CL

Abstract: The temporal aspect is a significant dimension of our reality. We notice the challenge that LLMs face when engaging in temporal reasoning. Our preliminary experiments show that methods involving the generation of intermediate reasoning steps, such as chain-of-thought and program-aided LLMs, do not consistently boost the performance of complex temporal question-answering tasks. This limitation can be attributed to the LLMs' inadequate understanding of temporal information. To address this problem, we propose TempLogic, a novel framework designed specifically for temporal question-answering tasks across three levels of reasoning. TempLogic incorporates retrieval-guided context distillation, temporal data extraction, and tailor-made logic reasoning. Extensive experiments and analysis demonstrate the effectiveness of our framework in solving intricate time-bound reasoning tasks.
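The abstract does not give implementation details, but the idea of combining "temporal data extraction" with "tailor-made logic reasoning" can be pictured as answering time-bound questions by explicit interval logic over extracted facts rather than free-form generation. Below is a purely illustrative sketch of that idea (not the TempLogic implementation); all names, events, and dates are invented for the example.

```python
from datetime import date

# Toy temporal QA: facts are (event, start, end) triples, as might be
# produced by a temporal data extraction step. All entries are invented.
facts = [
    ("A was CEO of Acme", date(2005, 3, 1), date(2010, 6, 30)),
    ("A was CTO of Beta", date(2010, 7, 1), date(2015, 1, 15)),
]

def events_during(query_start, query_end):
    """Return events whose interval overlaps [query_start, query_end].

    Two intervals overlap iff each starts before the other ends, so the
    answer follows from date comparisons, not language-model guessing.
    """
    return [event for event, start, end in facts
            if start <= query_end and end >= query_start]

# "What did A do between 2009 and 2011?" -- both roles overlap the window.
print(events_during(date(2009, 1, 1), date(2011, 1, 1)))
```

The point of such a symbolic step is that once the temporal facts are extracted, the reasoning reduces to deterministic date arithmetic, which is exactly where chain-of-thought-style free-text reasoning tends to be unreliable.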

Authors (6)
  1. Xingxuan Li (17 papers)
  2. Liying Cheng (16 papers)
  3. Qingyu Tan (9 papers)
  4. Hwee Tou Ng (44 papers)
  5. Shafiq Joty (187 papers)
  6. Lidong Bing (144 papers)