Large Language Model for Table Processing: A Survey (2402.05121v3)

Published 4 Feb 2024 in cs.AI and cs.CL

Abstract: Tables, typically two-dimensional and structured to store large amounts of data, are essential in daily activities like database queries, spreadsheet manipulations, web table question answering, and image table information extraction. Automating these table-centric tasks with LLMs or Visual LLMs (VLMs) offers significant public benefits, garnering interest from academia and industry. This survey provides a comprehensive overview of table-related tasks, examining both user scenarios and technical aspects. It covers traditional tasks like table question answering as well as emerging fields such as spreadsheet manipulation and table data analysis. We summarize the training techniques for LLMs and VLMs tailored for table processing. Additionally, we discuss prompt engineering, particularly the use of LLM-powered agents, for various table-related tasks. Finally, we highlight several challenges, including handling diverse user input when serving models and the slow inference incurred by chain-of-thought reasoning.
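The abstract mentions prompt engineering for table-related tasks such as table question answering. As a rough illustration only, not taken from the survey, the sketch below serializes a small table to markdown and wraps it in a chain-of-thought style prompt; the table contents, function names, and prompt wording are assumptions for demonstration.

```python
# Minimal sketch (illustrative, not from the paper): serialize a table and build
# a chain-of-thought prompt for table question answering. Send the resulting
# prompt to any LLM client of your choice.

def table_to_markdown(header, rows):
    """Render a table as markdown so an LLM can read it as plain text."""
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(str(v) for v in row) + " |" for row in rows]
    return "\n".join(lines)

def build_table_qa_prompt(header, rows, question):
    """Compose a prompt asking the model to reason step by step before answering."""
    return (
        "You are given the following table:\n\n"
        f"{table_to_markdown(header, rows)}\n\n"
        f"Question: {question}\n"
        "Think step by step over the relevant rows and columns, "
        "then give the final answer on the last line."
    )

if __name__ == "__main__":
    header = ["City", "Population (millions)"]   # assumed toy data
    rows = [["Beijing", 21.9], ["Shanghai", 24.9], ["Guangzhou", 18.7]]
    prompt = build_table_qa_prompt(header, rows, "Which city has the largest population?")
    print(prompt)  # pass `prompt` to an LLM to obtain the answer
```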

Authors (6)
  1. Weizheng Lu (3 papers)
  2. Jing Zhang (730 papers)
  3. Yueguo Chen (11 papers)
  4. Ju Fan (26 papers)
  5. Zihao Fu (17 papers)
  6. Xiaoyong Du (40 papers)
Citations (13)