The Role of Output Vocabulary in T2T LMs for SPARQL Semantic Parsing (2305.15108v1)

Published 24 May 2023 in cs.CL

Abstract: In this work, we analyse the role of output vocabulary for text-to-text (T2T) models on the task of SPARQL semantic parsing. We perform experiments within the context of knowledge graph question answering (KGQA), where the task is to convert questions in natural language to the SPARQL query language. We observe that the query vocabulary is distinct from human vocabulary. Language models (LMs) are predominantly trained for human language tasks, and hence, if the query vocabulary is replaced with a vocabulary more attuned to the LM tokenizer, the performance of models may improve. We carry out carefully selected vocabulary substitutions on the queries and find absolute gains in the range of 17% on the GrailQA dataset.

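To make the idea of vocabulary substitution concrete, the following is a minimal Python sketch of the general technique the abstract describes: SPARQL-specific tokens are mapped to placeholders that a pretrained tokenizer handles more naturally before training, and the mapping is inverted on the model's output to recover an executable query. The substitution table, query, and entity/relation IDs below are illustrative assumptions, not the paper's actual mappings.

```python
# Illustrative vocabulary substitution for SPARQL targets of a T2T model.
# The table below is hypothetical; the paper selects its own substitutions.
SUBSTITUTIONS = {
    "SELECT": "select",
    "DISTINCT": "distinct",
    "WHERE": "where",
    "{": "brack_open",
    "}": "brack_close",
    "?x": "var_x",
    "?y": "var_y",
    ".": "sep_dot",
}
REVERSE = {v: k for k, v in SUBSTITUTIONS.items()}

def encode_query(sparql: str) -> str:
    """Map raw SPARQL tokens to tokenizer-friendly placeholders (the model's training target)."""
    return " ".join(SUBSTITUTIONS.get(tok, tok) for tok in sparql.split())

def decode_query(prediction: str) -> str:
    """Invert the substitution on the model's output to recover executable SPARQL."""
    return " ".join(REVERSE.get(tok, tok) for tok in prediction.split())

if __name__ == "__main__":
    q = "SELECT DISTINCT ?x WHERE { ?x dbo:author dbr:Jane_Austen . }"
    target = encode_query(q)           # what the T2T model learns to emit
    print(target)
    print(decode_query(target) == q)   # True: the substitution is lossless
```

Because the mapping is a bijection over whitespace-separated tokens, it changes only how the target string is tokenized by the LM, not the semantics of the query, which is what allows the substituted vocabulary to be swapped back after generation.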
Authors (4)
  1. Debayan Banerjee (12 papers)
  2. Pranav Ajit Nair (6 papers)
  3. Ricardo Usbeck (36 papers)
  4. Chris Biemann (78 papers)
Citations (1)
