
Learning to Simulate Natural Language Feedback for Interactive Semantic Parsing (2305.08195v2)

Published 14 May 2023 in cs.CL

Abstract: Interactive semantic parsing based on natural language (NL) feedback, where users provide feedback to correct parser mistakes, has emerged as a more practical scenario than traditional one-shot semantic parsing. However, prior work has relied heavily on human-annotated feedback data to train the interactive semantic parser, which is prohibitively expensive and not scalable. In this work, we propose a new task of simulating NL feedback for interactive semantic parsing. We accompany the task with a novel feedback evaluator, specifically designed to assess the quality of the simulated feedback, which we use to select the best feedback simulator from our proposed variants. On a text-to-SQL dataset, we show that our feedback simulator can generate high-quality NL feedback that boosts the error correction ability of a specific parser. In low-data settings, our feedback simulator helps achieve error correction performance comparable to that obtained with the costly, full set of human annotations.
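The abstract describes an interactive loop: a base parser proposes a SQL query, a feedback simulator generates NL feedback by contrasting the prediction with the gold parse, and an error-correction model revises the query conditioned on that feedback. The following is a minimal sketch of that loop, not the authors' implementation; all names (`parse`, `simulate_feedback`, `correct`) and the example outputs are hypothetical placeholders.

```python
# Sketch (assumed structure, not the paper's code) of the interactive
# error-correction loop: parse -> simulate feedback -> correct.
from dataclasses import dataclass


@dataclass
class Example:
    question: str   # user's NL question
    gold_sql: str   # reference SQL, available when simulating feedback for training


def parse(question: str) -> str:
    """Hypothetical base parser: returns an initial SQL prediction."""
    return "SELECT name FROM singer"  # placeholder prediction


def simulate_feedback(question: str, predicted_sql: str, gold_sql: str) -> str:
    """Hypothetical feedback simulator: produces NL feedback describing
    how the predicted SQL deviates from the gold parse."""
    return "You should also return the singer's age, not just the name."


def correct(question: str, predicted_sql: str, feedback: str) -> str:
    """Hypothetical error-correction model conditioned on the NL feedback."""
    return "SELECT name, age FROM singer"  # placeholder corrected parse


def interactive_round(ex: Example) -> str:
    """One round of interaction: predict, simulate feedback if wrong, correct."""
    pred = parse(ex.question)
    if pred == ex.gold_sql:
        return pred
    feedback = simulate_feedback(ex.question, pred, ex.gold_sql)
    return correct(ex.question, pred, feedback)


if __name__ == "__main__":
    ex = Example("Show the name and age of every singer.",
                 "SELECT name, age FROM singer")
    print(interactive_round(ex))
```

In the paper's setup, the simulated feedback produced this way serves as training data for the error-correction model, replacing (or supplementing, in low-data settings) human-written feedback.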

Authors (6)
  1. Hao Yan (109 papers)
  2. Saurabh Srivastava (14 papers)
  3. Yintao Tai (2 papers)
  4. Sida I. Wang (20 papers)
  5. Wen-tau Yih (84 papers)
  6. Ziyu Yao (44 papers)
Citations (15)
