LOGEN: Few-shot Logical Knowledge-Conditioned Text Generation with Self-training (2112.01404v3)

Published 2 Dec 2021 in cs.CL, cs.AI, cs.IR, and cs.LG

Abstract: Natural language generation from structured data mainly focuses on surface-level descriptions, suffering from uncontrollable content selection and low fidelity. Previous works leverage logical forms to facilitate logical knowledge-conditioned text generation. Though achieving remarkable progress, they are data-hungry, which makes their adoption in real-world applications with limited data challenging. To this end, this paper proposes a unified framework for logical knowledge-conditioned text generation in the few-shot setting. With only a few seed logical forms (e.g., 20 or 100 shots), our approach leverages self-training and samples pseudo logical forms based on content and structure consistency. Experimental results demonstrate that our approach obtains better few-shot performance than baselines.
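
The self-training recipe in the abstract can be pictured as a filtering loop: fine-tune on a few seed pairs, propose pseudo logical forms for unlabeled records, and keep only proposals that pass content- and structure-consistency checks. The sketch below is a minimal illustration under assumed names, not the paper's implementation: `generate_form` stands in for the fine-tuned generator, and `content_score` / `structure_ok` are toy proxies for the paper's consistency measures; the threshold and round count are illustrative.

```python
from typing import Callable, List, Tuple

Pair = Tuple[str, str]  # (logical form, structured record)

def content_score(record: str, form: str) -> float:
    """Stand-in content check: token overlap between record and form."""
    rec = set(record.lower().split())
    frm = set(form.lower().split())
    return len(rec & frm) / max(len(rec), 1)

def structure_ok(form: str) -> bool:
    """Stand-in structure check: parentheses in the form must balance."""
    depth = 0
    for ch in form:
        depth += ch == "("
        depth -= ch == ")"
        if depth < 0:
            return False
    return depth == 0

def self_train(
    generate_form: Callable[[List[Pair], str], str],  # hypothetical generator interface
    seed_pairs: List[Pair],      # few-shot seeds, e.g. 20 or 100 examples
    unlabeled: List[str],        # structured records without logical forms
    rounds: int = 3,
    tau: float = 0.8,            # consistency threshold (illustrative value)
) -> List[Pair]:
    """Grow the training set with pseudo logical forms that pass both checks."""
    train = list(seed_pairs)
    remaining = list(unlabeled)
    for _ in range(rounds):
        kept, still_unlabeled = [], []
        for record in remaining:
            pseudo = generate_form(train, record)  # model proposes a logical form
            # Keep the pseudo form only if it is consistent with the record
            # in content (coverage) and structure (well-formedness).
            if content_score(record, pseudo) >= tau and structure_ok(pseudo):
                kept.append((pseudo, record))
            else:
                still_unlabeled.append(record)
        train.extend(kept)
        remaining = still_unlabeled
    return train
```

In this framing, each round enlarges the training set only with pseudo-labeled pairs that survive both filters, which is how self-training can stay reliable when starting from so few seeds.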

Authors (9)
  1. Shumin Deng (65 papers)
  2. Jiacheng Yang (11 papers)
  3. Hongbin Ye (16 papers)
  4. Chuanqi Tan (56 papers)
  5. Mosha Chen (17 papers)
  6. Songfang Huang (51 papers)
  7. Fei Huang (408 papers)
  8. Huajun Chen (198 papers)
  9. Ningyu Zhang (148 papers)
Citations (7)