
Towards Faithful Neural Table-to-Text Generation with Content-Matching Constraints (2005.00969v1)

Published 3 May 2020 in cs.CL

Abstract: Text generation from a knowledge base aims to translate knowledge triples into natural-language descriptions. Most existing methods ignore the faithfulness between the generated description and the original table, producing information that goes beyond the table's content. In this paper, we propose, for the first time, a novel Transformer-based generation framework that enforces this faithfulness. The core techniques in our method are a new table-text optimal-transport matching loss and a table-text embedding similarity loss, both built on the Transformer model. Furthermore, to evaluate faithfulness, we propose a new automatic metric specialized to the table-to-text generation problem. We also provide a detailed analysis of each component of our model in our experiments. Automatic and human evaluations show that our framework outperforms the state of the art by a large margin.
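
To make the two faithfulness losses concrete, here is a minimal PyTorch sketch of the general idea: an entropy-regularized optimal-transport (Sinkhorn) cost between table-token and text-token embeddings, plus a pooled embedding-similarity term. This is an illustrative reconstruction, not the authors' released code; the function names, the Sinkhorn solver, the uniform mass assumption, and the loss weights are all assumptions for the sketch.

```python
import torch
import torch.nn.functional as F

def ot_matching_loss(table_emb, text_emb, n_iters=20, eps=0.1):
    """Entropy-regularized OT (Sinkhorn) cost between two token-embedding sets.

    table_emb: (m, d) embeddings of the linearized table tokens.
    text_emb:  (n, d) embeddings of the generated text tokens.
    Minimizing the transport cost encourages each table token to be
    "covered" by some text token, discouraging hallucinated content.
    """
    # Cost matrix: 1 - cosine similarity for every table/text token pair.
    table_n = F.normalize(table_emb, dim=-1)
    text_n = F.normalize(text_emb, dim=-1)
    cost = 1.0 - table_n @ text_n.T                     # (m, n)

    m, n = cost.shape
    mu = torch.full((m,), 1.0 / m, device=cost.device)  # uniform table mass (assumption)
    nu = torch.full((n,), 1.0 / n, device=cost.device)  # uniform text mass (assumption)

    K = torch.exp(-cost / eps)                          # Gibbs kernel
    u = torch.ones_like(mu)
    for _ in range(n_iters):                            # Sinkhorn iterations
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    plan = u[:, None] * K * v[None, :]                  # approximate transport plan
    return (plan * cost).sum()

def embedding_similarity_loss(table_emb, text_emb):
    """Pull the mean-pooled table and text representations together."""
    table_vec = F.normalize(table_emb.mean(dim=0), dim=-1)
    text_vec = F.normalize(text_emb.mean(dim=0), dim=-1)
    return 1.0 - table_vec @ text_vec
```

In training, these terms would be added to the usual generation objective, e.g. `loss = nll + lam_ot * ot_matching_loss(t, x) + lam_sim * embedding_similarity_loss(t, x)`, where `lam_ot` and `lam_sim` are hypothetical weighting hyperparameters.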

Authors (5)
  1. Zhenyi Wang (27 papers)
  2. Xiaoyang Wang (134 papers)
  3. Bang An (33 papers)
  4. Dong Yu (329 papers)
  5. Changyou Chen (108 papers)
Citations (83)
