Are Transformers a Modern Version of ELIZA? Observations on French Object Verb Agreement (2109.10133v1)
Published 21 Sep 2021 in cs.CL
Abstract: Many recent works have argued that the unsupervised sentence representations of neural networks encode syntactic information, on the grounds that neural language models can predict the agreement between a verb and its subject. We take a critical look at this line of research by showing that simple surface heuristics can achieve high accuracy on this agreement task, indicating a possible flaw in how neural networks' syntactic ability is assessed. Our fine-grained analysis of results on long-range French object-verb agreement shows that, unlike LSTMs, Transformers capture a non-trivial amount of grammatical structure.
- Bingzhi Li
- Guillaume Wisniewski
- Benoit Crabbé