
Exploiting Rich Syntactic Information for Semantic Parsing with Graph-to-Sequence Model (1808.07624v1)

Published 23 Aug 2018 in cs.CL and cs.AI

Abstract: Existing neural semantic parsers mainly use a sequence encoder, i.e., a sequential LSTM, to extract word-order features while neglecting other valuable syntactic information such as dependency graphs or constituency trees. In this paper, we first propose to use the syntactic graph to represent three types of syntactic information, i.e., word order, dependency, and constituency features. We further employ a graph-to-sequence model to encode the syntactic graph and decode a logical form. Experimental results on benchmark datasets show that our model is comparable to the state of the art on Jobs640, ATIS, and Geo880. Experimental results on adversarial examples demonstrate that the model's robustness is also improved by encoding more syntactic information.
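
The abstract's core idea, merging word order, dependency, and constituency information into a single syntactic graph, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical construction and not the authors' code: the toy sentence, its dependency triples, and its constituent spans are hand-written stand-ins for real parser output, and the paper's exact node and edge typing may differ.

```python
# Minimal sketch of a syntactic graph: one node per word plus one node per
# constituent, with three edge types combined into a single adjacency structure.
# The parses below are hand-written toy examples, not real parser output.

from collections import defaultdict

words = ["show", "me", "flights", "from", "boston", "to", "seattle"]

# Hypothetical dependency parse: (head index, dependent index, relation).
dependencies = [
    (0, 1, "iobj"),
    (0, 2, "dobj"),
    (2, 3, "prep"),
    (3, 4, "pobj"),
    (2, 5, "prep"),
    (5, 6, "pobj"),
]

# Hypothetical constituents: (label, covered word indices).
constituents = [
    ("NP", [2, 3, 4, 5, 6]),
    ("PP", [3, 4]),
    ("PP", [5, 6]),
]

graph = defaultdict(list)  # node -> [(neighbor, edge_type)]

# 1. Word-order edges: link each word to its successor.
for i in range(len(words) - 1):
    graph[f"w{i}"].append((f"w{i + 1}", "next"))

# 2. Dependency edges: link heads to dependents, typed by relation.
for head, dep, rel in dependencies:
    graph[f"w{head}"].append((f"w{dep}", rel))

# 3. Constituency edges: link each constituent node to the words it covers.
for k, (label, span) in enumerate(constituents):
    for i in span:
        graph[f"c{k}:{label}"].append((f"w{i}", "contains"))

for node, edges in sorted(graph.items()):
    print(node, "->", edges)
```

Per the abstract, a graph-to-sequence model would then encode a structure like this (e.g., via message passing over the typed edges) before a decoder emits the logical form; the encoding details are the paper's contribution and are not reproduced here.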

Authors (6)
  1. Kun Xu (277 papers)
  2. Lingfei Wu (135 papers)
  3. Zhiguo Wang (100 papers)
  4. Mo Yu (117 papers)
  5. Liwei Chen (26 papers)
  6. Vadim Sheinin (7 papers)
Citations (73)
