Improving Question Answering by Commonsense-Based Pre-Training (1809.03568v3)

Published 5 Sep 2018 in cs.CL

Abstract: Although neural network approaches achieve remarkable success on a variety of NLP tasks, many of them struggle to answer questions that require commonsense knowledge. We believe the main reason is the lack of commonsense connections between concepts. To remedy this, we provide a simple and effective method that leverages an external commonsense knowledge base such as ConceptNet. We pre-train direct and indirect relational functions between concepts, and show that these pre-trained functions can easily be added to existing neural network models. Results show that incorporating the commonsense-based functions improves the baseline on three question answering tasks that require commonsense reasoning. Further analysis shows that our system discovers and leverages useful evidence from an external commonsense knowledge base, which is missing in existing neural network models, and helps derive the correct answer.
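
To make the pre-training idea concrete, the sketch below scores ConceptNet-style (subject, relation, object) triples with a bilinear function over concept embeddings and trains it with a margin ranking loss against corrupted negatives. This is a minimal illustration under assumed design choices: the bilinear form, the margin loss, and the names `RelationFunction` and `pretrain_step` are not from the paper, whose exact relational functions may differ.

```python
import torch
import torch.nn as nn

class RelationFunction(nn.Module):
    """Illustrative relational scoring function over concept embeddings.

    Scores how plausibly a (subject, relation, object) triple from a
    commonsense KB such as ConceptNet holds. The bilinear form is an
    assumption for illustration, not the paper's exact architecture.
    """

    def __init__(self, num_concepts: int, num_relations: int, dim: int = 100):
        super().__init__()
        self.concept_emb = nn.Embedding(num_concepts, dim)
        self.relation_mat = nn.Parameter(torch.randn(num_relations, dim, dim) * 0.01)

    def forward(self, subj, rel, obj):
        # score(s, r, o) = e_s^T W_r e_o, a standard bilinear KB score
        e_s = self.concept_emb(subj)   # (batch, dim)
        e_o = self.concept_emb(obj)    # (batch, dim)
        W_r = self.relation_mat[rel]   # (batch, dim, dim)
        return torch.einsum('bd,bde,be->b', e_s, W_r, e_o)

def pretrain_step(model, opt, pos, neg, margin=1.0):
    """One pre-training step: rank observed KB triples above
    randomly corrupted negatives via a margin ranking loss."""
    pos_score = model(*pos)  # pos/neg are (subj, rel, obj) index tensors
    neg_score = model(*neg)
    loss = torch.relu(margin - pos_score + neg_score).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Once pre-trained this way, the frozen or fine-tuned scoring function (and its concept embeddings) could be attached to a downstream QA model as an extra feature, which matches the paper's claim that the pre-trained functions are easy to add to existing networks.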

Authors (6)
  1. Wanjun Zhong (49 papers)
  2. Duyu Tang (65 papers)
  3. Nan Duan (172 papers)
  4. Ming Zhou (182 papers)
  5. Jiahai Wang (31 papers)
  6. Jian Yin (67 papers)
Citations (60)