HeLM: Highlighted Evidence augmented Language Model for Enhanced Table-to-Text Generation (2311.08896v2)
Abstract: Large language models have demonstrated significant progress across various domains, particularly in text generation tasks. In the table-to-text domain, many LLM-based methods currently resort to modifying prompts to invoke public APIs, incurring potential costs and risking information leakage. With the advent of open-source large models, fine-tuning LLMs has become feasible. In this study, we conducted parameter-efficient fine-tuning on the LLaMA2 model. Distinguishing itself from previous fine-tuning-based table-to-text methods, our approach injects reasoning information into the input by emphasizing table-specific row data. Our model consists of two modules: 1) a table reasoner that identifies relevant row evidence, and 2) a table summarizer that generates sentences based on the highlighted table. To facilitate this, we propose a search strategy to construct reasoning labels for training the table reasoner. On both the FetaQA and QTSumm datasets, our approach achieved state-of-the-art results. Additionally, we observed that highlighting input tables significantly enhances the model's performance and provides valuable interpretability.
- Junyi Bian (6 papers)
- Xiaolei Qin (5 papers)
- Wuhe Zou (3 papers)
- Mengzuo Huang (3 papers)
- Weidong Zhang (41 papers)
- Congyi Luo (1 paper)
- Ke Zhang (264 papers)
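The two-module pipeline described in the abstract (a reasoner that highlights evidence rows, followed by a summarizer that generates from the highlighted table) can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: a simple token-overlap scorer replaces the fine-tuned LLaMA2 reasoner, a `[HIGHLIGHT]` row prefix stands in for the paper's highlighting scheme, and all function names and the table format are hypothetical.

```python
# Hypothetical sketch of a two-stage highlight-then-summarize pipeline.
# The real HeLM reasoner and summarizer are fine-tuned LLaMA2 modules;
# here a keyword-overlap heuristic plays the reasoner's role.

def highlight_rows(table, question, top_k=2):
    """Reasoner stand-in: score each row by token overlap with the
    question and return the indices of the top-k evidence rows."""
    q_tokens = set(question.lower().split())
    scores = []
    for i, row in enumerate(table["rows"]):
        row_tokens = set(" ".join(row).lower().split())
        scores.append((len(q_tokens & row_tokens), i))
    top = {i for _, i in sorted(scores, reverse=True)[:top_k]}
    return sorted(top)

def linearize(table, evidence):
    """Serialize the table for the summarizer, prefixing the evidence
    rows so the generator can attend to the marked cells."""
    lines = [" | ".join(table["header"])]
    for i, row in enumerate(table["rows"]):
        prefix = "[HIGHLIGHT] " if i in evidence else ""
        lines.append(prefix + " | ".join(row))
    return "\n".join(lines)

table = {
    "header": ["Player", "Team", "Goals"],
    "rows": [
        ["Alice", "Reds", "10"],
        ["Bob", "Blues", "7"],
        ["Carol", "Reds", "12"],
    ],
}
evidence = highlight_rows(table, "How many goals did Carol score?", top_k=1)
print(evidence)                      # indices of rows chosen as evidence
print(linearize(table, evidence))    # highlighted table fed to the summarizer
```

In the full method, the linearized highlighted table would be the input to the fine-tuned summarizer LLM, and the reasoner itself would be trained on reasoning labels constructed via the paper's search strategy rather than a fixed heuristic.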