Zero-shot Transfer Learning for Semantic Parsing (1808.09889v1)

Published 27 Aug 2018 in cs.CL, cs.LG, and stat.ML

Abstract: While neural networks have shown impressive performance on large datasets, applying these models to tasks where little data is available remains a challenging problem. In this paper we propose to use feature transfer in a zero-shot experimental setting on the task of semantic parsing. We first introduce a new method for learning the shared space between multiple domains based on the prediction of the domain label for each example. Our experiments support the superiority of this method in a zero-shot setting in terms of accuracy metrics compared to state-of-the-art techniques. In the second part of this paper we study the impact of individual domains and examples on semantic parsing performance. To this end, we use influence functions and investigate the sensitivity of the domain-label classification loss to each example. Our findings reveal that cross-domain adversarial attacks identify useful examples for training even from the domains least similar to the target domain. Augmenting our training data with these influential examples further boosts our accuracy at both the token and the sequence level.
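
The shared-space method described in the abstract pairs a task model with a domain-label classifier over a common representation. The snippet below is a minimal PyTorch sketch of one standard way to wire this up, a DANN-style gradient-reversal layer (Ganin & Lempitsky, 2015); the architecture, dimensions, and the adversarial (rather than auxiliary) treatment of the domain head are illustrative assumptions, not the paper's exact model.

```python
# Sketch: learning a domain-shared encoding via domain-label prediction.
# All dimensions and layer choices are hypothetical placeholders.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class SharedEncoder(nn.Module):
    def __init__(self, vocab_size=10_000, emb_dim=128, hid_dim=256,
                 n_domains=8, n_labels=50, lambd=0.5):
        super().__init__()
        self.lambd = lambd
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        # Task head: stands in for the semantic-parsing decoder.
        self.task_head = nn.Linear(hid_dim, n_labels)
        # Domain head: predicts which domain the example came from.
        self.domain_head = nn.Linear(hid_dim, n_domains)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))
        pooled = h.mean(dim=1)  # shared representation across domains
        task_logits = self.task_head(pooled)
        # Reversed gradients push the encoder toward domain invariance.
        dom_logits = self.domain_head(GradReverse.apply(pooled, self.lambd))
        return task_logits, dom_logits
```

Training would then minimize the parsing loss plus the domain-classification loss, e.g. `loss = ce(task_logits, y) + ce(dom_logits, d)`; the reversal layer makes the encoder work against the domain head, so features that survive are those shared across domains.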

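The second half of the abstract relies on influence functions (Koh & Liang, 2017) to score how much each training example moves a loss of interest. Below is a toy sketch of the classic formulation with an exact, damped Hessian; this is tractable only for very small models (practical use relies on LiSSA or conjugate-gradient approximations), and `model`, `loss_fn`, and the `(x, y)` example format are placeholders rather than the paper's implementation.

```python
# Toy influence-function sketch in the spirit of Koh & Liang (2017):
#   I(z_train, z_test) = -grad L(z_test)^T  H^{-1}  grad L(z_train)
import torch

def flat_grad(loss, params, create_graph=False, retain_graph=None):
    grads = torch.autograd.grad(loss, params, create_graph=create_graph,
                                retain_graph=retain_graph)
    return torch.cat([g.reshape(-1) for g in grads])

def influence(model, loss_fn, train_set, z_train, z_test, damping=0.01):
    """Estimated change in test loss from upweighting z_train."""
    params = [p for p in model.parameters() if p.requires_grad]
    example_loss = lambda z: loss_fn(model(z[0]), z[1])

    # Exact Hessian of the mean training loss w.r.t. the parameters.
    mean_loss = sum(example_loss(z) for z in train_set) / len(train_set)
    g = flat_grad(mean_loss, params, create_graph=True)
    H = torch.stack([flat_grad(g[i], params, retain_graph=True)
                     for i in range(g.numel())])
    H = H + damping * torch.eye(g.numel())  # damping keeps the solve stable

    g_test = flat_grad(example_loss(z_test), params)
    g_train = flat_grad(example_loss(z_train), params)
    ihvp = torch.linalg.solve(H, g_train)   # H^{-1} grad L(z_train)
    return -torch.dot(g_test, ihvp).item()  # negative => helpful example
```

A strongly negative score marks an example whose upweighting would lower the loss of interest; the abstract's data-augmentation step corresponds to adding the most helpful such examples, even from dissimilar domains, back into training.
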
Authors (4)
  1. Javid Dadashkarimi (9 papers)
  2. Alexander Fabbri (11 papers)
  3. Sekhar Tatikonda (33 papers)
  4. Dragomir R. Radev (14 papers)
Citations (4)
