Injecting Numerical Reasoning Skills into Knowledge Base Question Answering Models (2112.06109v2)

Published 12 Dec 2021 in cs.CL

Abstract: Embedding-based methods are popular for Knowledge Base Question Answering (KBQA), but few current models have numerical reasoning skills and thus struggle to answer ordinal-constrained questions. This paper proposes a new embedding-based KBQA framework that explicitly takes numerical reasoning into account. We present NumericalTransformer on top of NSM, a state-of-the-art embedding-based KBQA model, to create NT-NSM. To enable better training, we propose two pre-training tasks with explicit numerical-oriented loss functions on two generated training datasets, along with a template-based data augmentation method for enriching ordinal-constrained QA datasets. Extensive experiments on KBQA benchmarks demonstrate that, with the help of our training algorithm, NT-NSM is empowered with numerical reasoning skills and substantially outperforms the baselines in answering ordinal-constrained questions.
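
The abstract names two training ingredients: explicit numerical-oriented pre-training losses and template-based augmentation of ordinal-constrained questions. The sketch below is not the authors' code; the templates, function names, and the choice of a margin ranking loss are assumptions made purely to illustrate what each ingredient could look like in practice.

```python
# Hypothetical sketch (not the paper's implementation): (1) template-based
# augmentation of ordinal-constrained questions and (2) an explicit
# numerical-ordering loss suitable for pre-training.
import torch
import torch.nn as nn

# (1) Template-based data augmentation: fill ordinal templates with an
# entity type and a numerical attribute drawn from the knowledge base.
TEMPLATES = [
    "Which {etype} has the {ordinal} {attr}?",
    "What is the {etype} with the {ordinal} {attr}?",
]
ORDINALS = ["highest", "second highest", "lowest"]

def augment(etype: str, attr: str) -> list[str]:
    """Generate ordinal-constrained questions for one KB attribute."""
    return [t.format(etype=etype, attr=attr, ordinal=o)
            for t in TEMPLATES for o in ORDINALS]

# (2) Numerical-ordering loss: if entity a's attribute value exceeds
# entity b's, the model's score for a should exceed b's by a margin.
rank_loss = nn.MarginRankingLoss(margin=0.1)

def ordering_loss(score_a, score_b, value_a, value_b):
    target = torch.sign(value_a - value_b)  # +1 if a should rank above b
    return rank_loss(score_a, score_b, target)

if __name__ == "__main__":
    print(augment("river", "length"))
    s_a, s_b = torch.tensor([0.8]), torch.tensor([0.3])
    v_a, v_b = torch.tensor([6650.0]), torch.tensor([6400.0])
    print(ordering_loss(s_a, s_b, v_a, v_b))
```

A margin-based ordering objective is one common way to encode "entities with larger attribute values should score higher"; the paper's actual loss functions and generated pre-training datasets may be defined differently.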

Authors (6)
  1. Yu Feng (216 papers)
  2. Jing Zhang (732 papers)
  3. Xiaokang Zhang (42 papers)
  4. Lemao Liu (62 papers)
  5. Cuiping Li (42 papers)
  6. Hong Chen (230 papers)
Citations (4)
