Modern Baselines for SPARQL Semantic Parsing (2204.12793v3)

Published 27 Apr 2022 in cs.IR and cs.CL

Abstract: In this work, we focus on the task of generating SPARQL queries from natural language questions, which can then be executed on Knowledge Graphs (KGs). We assume that gold entities and relations have been provided, and the remaining task is to arrange them in the right order, along with SPARQL vocabulary and input tokens, to produce the correct SPARQL query. Pre-trained Language Models (PLMs) have not been explored in depth on this task so far, so we experiment with BART, T5 and PGNs (Pointer Generator Networks) with BERT embeddings, looking for new baselines in the PLM era for this task, on DBpedia and Wikidata KGs. We show that T5 requires special input tokenisation, but produces state-of-the-art performance on the LC-QuAD 1.0 and LC-QuAD 2.0 datasets, outperforming task-specific models from previous works. Moreover, the methods enable semantic parsing for questions where a part of the input needs to be copied to the output query, thus enabling a new paradigm in KG semantic parsing.
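The abstract notes that T5 requires special input tokenisation: T5's SentencePiece vocabulary lacks SPARQL punctuation such as braces and variable markers, so these are typically rewritten as reserved text tokens before encoding and restored after decoding. Below is a minimal sketch of that idea; the specific placeholder mapping (`brack_open`, `var_`, etc.) is an illustrative assumption, not the paper's published vocabulary.

```python
# Sketch: map SPARQL symbols that are absent from T5's SentencePiece
# vocabulary to plain-text placeholder tokens, and invert the mapping
# after generation. The mapping below is a hypothetical example.
SPARQL_TO_TEXT = {
    "{": " brack_open ",
    "}": " brack_close ",
    "?": " var_",
    ".": " sep_dot ",
}

def encode_sparql(query: str) -> str:
    """Replace SPARQL symbols with model-safe placeholder tokens."""
    for sym, tok in SPARQL_TO_TEXT.items():
        query = query.replace(sym, tok)
    return " ".join(query.split())  # normalise whitespace

def decode_sparql(text: str) -> str:
    """Invert the placeholder mapping to recover an executable query."""
    for sym, tok in SPARQL_TO_TEXT.items():
        text = text.replace(tok.strip(), sym)
    return " ".join(text.split())

q = "SELECT ?x WHERE { ?x dbo:author dbr:Dan_Brown . }"
assert decode_sparql(encode_sparql(q)) == q  # lossless round trip
```

In practice the placeholder-rewritten query is what the seq2seq model is trained to emit, and `decode_sparql` is applied to the generated string before executing it against the KG endpoint.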

Authors (5)
  1. Debayan Banerjee (12 papers)
  2. Pranav Ajit Nair (6 papers)
  3. Jivat Neet Kaur (7 papers)
  4. Ricardo Usbeck (36 papers)
  5. Chris Biemann (78 papers)
Citations (25)