HeLM: Highlighted Evidence augmented Language Model for Enhanced Table-to-Text Generation (2311.08896v2)

Published 15 Nov 2023 in cs.CL

Abstract: Large models have demonstrated significant progress across various domains, particularly in tasks related to text generation. In the domain of table-to-text generation, many LLM-based methods currently resort to modifying prompts to invoke public APIs, incurring potential costs and information leaks. With the advent of open-source large models, fine-tuning LLMs has become feasible. In this study, we conducted parameter-efficient fine-tuning on the LLaMA2 model. Distinguishing itself from previous fine-tuning-based table-to-text methods, our approach involves injecting reasoning information into the input by emphasizing table-specific row data. Our model consists of two modules: 1) a table reasoner that identifies relevant row evidence, and 2) a table summarizer that generates sentences based on the highlighted table. To facilitate this, we propose a search strategy to construct reasoning labels for training the table reasoner. On both the FetaQA and QTSumm datasets, our approach achieved state-of-the-art results. Additionally, we observed that highlighting input tables significantly enhances the model's performance and provides valuable interpretability.
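The two-module pipeline described in the abstract can be sketched in miniature: a reasoner selects evidence rows, and the table is re-linearized with those rows marked before being handed to the summarizer. The lexical-overlap scorer and the `[HIGHLIGHT]` marker below are illustrative stand-ins, not the paper's fine-tuned LLaMA2 reasoner or its actual highlighting format.

```python
# Hypothetical sketch of a HeLM-style pipeline: a "reasoner" picks evidence
# rows, and the highlighted linearization becomes the summarizer's input.
# The word-overlap scorer is a toy stand-in for the learned table reasoner.

def reason_rows(table, query, top_k=2):
    """Score each row by word overlap with the query; return evidence row indices."""
    q = set(query.lower().split())
    scores = [(i, len(q & set(" ".join(map(str, row)).lower().split())))
              for i, row in enumerate(table["rows"])]
    ranked = sorted(scores, key=lambda x: -x[1])
    # Keep at most top_k rows, and only those with nonzero overlap.
    return sorted(i for i, s in ranked[:top_k] if s > 0)

def linearize_highlighted(table, evidence):
    """Linearize the table, marking evidence rows so the summarizer can focus on them."""
    lines = [" | ".join(table["header"])]
    for i, row in enumerate(table["rows"]):
        prefix = "[HIGHLIGHT] " if i in evidence else ""
        lines.append(prefix + " | ".join(map(str, row)))
    return "\n".join(lines)

table = {
    "header": ["Player", "Team", "Goals"],
    "rows": [["Alice", "Reds", 12], ["Bob", "Blues", 7], ["Cara", "Reds", 15]],
}
evidence = reason_rows(table, "How many goals did Cara score?")
prompt = linearize_highlighted(table, evidence)
print(prompt)
```

In the paper, both modules are parameter-efficiently fine-tuned LLaMA2 models, and the reasoner is trained on reasoning labels produced by the proposed search strategy rather than a heuristic like the one above.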

Authors (7)
  1. Junyi Bian (6 papers)
  2. Xiaolei Qin (5 papers)
  3. Wuhe Zou (3 papers)
  4. Mengzuo Huang (3 papers)
  5. Weidong Zhang (41 papers)
  6. Congyi Luo (1 paper)
  7. Ke Zhang (264 papers)
Citations (2)