Several Experiments on Investigating Pretraining and Knowledge-Enhanced Models for Natural Language Inference (1904.12104v1)

Published 27 Apr 2019 in cs.CL

Abstract: Natural language inference (NLI) is among the most challenging tasks in natural language understanding. Recent work on unsupervised pretraining that leverages unsupervised signals such as language-model and sentence prediction objectives has been shown to be very effective on a wide range of NLP problems. It would still be desirable to further understand how it helps NLI; e.g., whether it learns artifacts in data annotation or instead learns true inference knowledge. In addition, external knowledge that does not exist in the limited amount of NLI training data may be added to NLI models in two typical ways, e.g., from human-created resources or through an unsupervised pretraining paradigm. We run several experiments here to investigate whether they help NLI in the same way, and if not, how?

Authors (6)
  1. Tianda Li (10 papers)
  2. Xiaodan Zhu (94 papers)
  3. Quan Liu (116 papers)
  4. Qian Chen (264 papers)
  5. Zhigang Chen (102 papers)
  6. Si Wei (19 papers)
Citations (17)