Learning to Reduce: Towards Improving Performance of Large Language Models on Structured Data (2407.02750v1)
Abstract: LLMs achieve competent performance on a wide range of downstream tasks, yet existing work shows that inference over structured data remains challenging for them. This is because LLMs must either understand long structured inputs or select the most relevant evidence before inference, and neither approach is trivial. This paper proposes a framework, Learning to Reduce, that fine-tunes an LLM with On-Policy Learning to generate a reduced version of an input structured data. Compared to state-of-the-art LLMs like GPT-4, Learning to Reduce not only achieves outstanding performance in reducing the input but also generalizes across different datasets. We further show that a model fine-tuned with our framework helps LLMs perform better on table QA tasks, especially when the context is longer.
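To make the reduce-then-infer idea concrete, here is a minimal sketch of context reduction before table QA. It uses a simple token-overlap heuristic as a hypothetical stand-in for the paper's learned reducer (the actual framework fine-tunes an LLM with on-policy learning); the function name and scoring rule are illustrative assumptions, not the authors' implementation.

```python
import re

def reduce_table(question, rows, k=2):
    """Keep the k table rows with the highest token overlap with the question.

    This is a heuristic placeholder: Learning to Reduce instead trains an
    LLM policy to select relevant evidence, but the interface is the same --
    a long table goes in, a shorter, question-relevant table comes out.
    """
    q_tokens = set(re.findall(r"\w+", question.lower()))

    def score(row):
        row_tokens = set(re.findall(r"\w+", " ".join(map(str, row)).lower()))
        return len(q_tokens & row_tokens)

    # Stable sort keeps the original row order among ties.
    return sorted(rows, key=score, reverse=True)[:k]

rows = [
    ("France", "Paris", 67_000_000),
    ("Japan", "Tokyo", 125_000_000),
    ("Brazil", "Brasilia", 213_000_000),
]
# The reduced table (here, a single row) would then be passed to the
# downstream LLM for question answering instead of the full table.
reduced = reduce_table("What is the capital of Japan?", rows, k=1)
```

The key design point is that reduction shortens the context the downstream QA model must read, which is where the paper reports the largest gains on longer inputs.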
- Younghun Lee (6 papers)
- Sungchul Kim (65 papers)
- Ryan A. Rossi (124 papers)
- Tong Yu (119 papers)
- Xiang Chen (343 papers)