
Structural Information Preserving for Graph-to-Text Generation (2102.06749v1)

Published 12 Feb 2021 in cs.CL

Abstract: The task of graph-to-text generation aims at producing sentences that preserve the meaning of input graphs. A crucial defect of current state-of-the-art models is that they may garble or even drop the core structural information of input graphs when generating outputs. We propose to tackle this problem by leveraging richer training signals that can guide our model toward preserving input information. In particular, we introduce two types of autoencoding losses, each individually focusing on different aspects (a.k.a. views) of input graphs. The losses are then back-propagated to better calibrate our model via multi-task training. Experiments on two benchmarks for graph-to-text generation show the effectiveness of our approach over a state-of-the-art baseline. Our code is available at http://github.com/Soistesimmer/AMR-multiview.
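In essence, the training objective combines the standard graph-to-text generation loss with the two view-specific autoencoding losses in a multi-task fashion. Below is a minimal sketch of such a combined objective; the function name `multi_task_loss` and the weights `alpha`/`beta` are illustrative assumptions rather than the paper's actual implementation, which is available at the linked repository.

```python
import torch

def multi_task_loss(gen_loss: torch.Tensor,
                    view1_ae_loss: torch.Tensor,
                    view2_ae_loss: torch.Tensor,
                    alpha: float = 1.0,
                    beta: float = 1.0) -> torch.Tensor:
    """Combine the graph-to-text generation loss with two auxiliary
    autoencoding losses, each reconstructing a different view of the
    input graph, and back-propagate them jointly via multi-task training.
    The weighting scheme (alpha, beta) is a hypothetical choice here."""
    return gen_loss + alpha * view1_ae_loss + beta * view2_ae_loss

# Usage with placeholder scalar losses:
loss = multi_task_loss(torch.tensor(2.3), torch.tensor(0.7), torch.tensor(0.9))
print(float(loss))  # 3.9 with the default weights
```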

Authors (7)
  1. Linfeng Song (76 papers)
  2. Ante Wang (14 papers)
  3. Jinsong Su (96 papers)
  4. Yue Zhang (620 papers)
  5. Kun Xu (277 papers)
  6. Yubin Ge (18 papers)
  7. Dong Yu (329 papers)
Citations (52)
