HRoT: Hybrid prompt strategy and Retrieval of Thought for Table-Text Hybrid Question Answering (2309.12669v1)
Published 22 Sep 2023 in cs.CL
Abstract: Answering numerical questions over hybrid content drawn from tables and text (TextTableQA) is a challenging task. Recently, LLMs have gained significant attention in the NLP community, and with their emergence, in-context learning and chain-of-thought prompting have become two particularly popular research topics in this field. In this paper, we introduce a new prompting strategy for TextTableQA called Hybrid prompt strategy and Retrieval of Thought. Through in-context learning, we prompt the model to develop the ability of retrieval thinking when dealing with hybrid data. Our method outperforms the fully-supervised state of the art on the MultiHiertt dataset in the few-shot setting.
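The abstract describes the approach only at a high level. As a minimal sketch (not the paper's actual code), the general idea of a hybrid table-plus-text prompt with retrieved reasoning exemplars could look like the following; the function names, the word-overlap retrieval heuristic, and the prompt layout are all assumptions for illustration:

```python
# Illustrative sketch of a hybrid prompt with retrieved exemplars.
# All names and the similarity heuristic are assumptions, not the
# paper's implementation.

def linearize_table(table):
    """Flatten a table (header row + data rows) into 'header: cell' lines."""
    header, *rows = table
    return "\n".join(
        "; ".join(f"{h}: {c}" for h, c in zip(header, row)) for row in rows
    )

def retrieve_exemplars(question, pool, k=2):
    """Pick the k exemplars whose questions share the most words with ours."""
    q_words = set(question.lower().split())
    scored = sorted(
        pool,
        key=lambda ex: len(q_words & set(ex["question"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_hybrid_prompt(question, table, passages, pool):
    """Compose retrieved exemplars, hybrid context, and the question."""
    parts = [
        f"Q: {ex['question']}\nReasoning: {ex['cot']}\nA: {ex['answer']}"
        for ex in retrieve_exemplars(question, pool)
    ]
    parts.append("Table:\n" + linearize_table(table))
    parts.append("Text:\n" + "\n".join(passages))
    parts.append(f"Q: {question}\nReasoning:")
    return "\n\n".join(parts)
```

The resulting string would then be sent to an LLM; the few-shot exemplars carry worked reasoning chains so the model imitates that reasoning style on the new hybrid context.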