Exploring Unsupervised Pretraining and Sentence Structure Modelling for Winograd Schema Challenge (1904.09705v1)
Abstract: The Winograd Schema Challenge (WSC) was proposed as an AI-hard problem for testing computers' intelligence in commonsense representation and reasoning. This paper presents a new state-of-the-art on WSC, achieving an accuracy of 71.1%. We demonstrate that the leading performance benefits from jointly modelling sentence structures, utilizing knowledge learned from cutting-edge pretraining models, and performing fine-tuning. We conduct detailed analyses showing that fine-tuning is critical for achieving this performance, but that it helps more on the simpler associative problems. Modelling sentence dependency structures, however, consistently helps on the harder non-associative subset of WSC. Analysis also shows that larger fine-tuning datasets yield better performance, suggesting the potential benefit of future work on annotating more Winograd schema sentences.
- Xiaodan Zhu (94 papers)
- Zhen-Hua Ling (114 papers)
- Zhan Shi (84 papers)
- Quan Liu (116 papers)
- Si Wei (19 papers)
- Yu-ping Ruan (12 papers)