Probing Linguistic Information For Logical Inference In Pre-trained Language Models (2112.01753v2)
Abstract: Progress in pre-trained language models has led to a surge of impressive results on downstream tasks for natural language understanding. Recent work on probing pre-trained language models uncovered a wide range of linguistic properties encoded in their contextualized representations. However, it is unclear whether they encode the semantic knowledge that is crucial to symbolic inference methods. We propose a methodology for probing linguistic information for logical inference in pre-trained language model representations. Our probing datasets cover a list of linguistic phenomena required by major symbolic inference systems. We find that (i) pre-trained language models do encode several types of linguistic information for inference, but some types are only weakly encoded, and (ii) language models can effectively learn missing linguistic information through fine-tuning. Overall, our findings provide insights into which aspects of linguistic information for logical inference language models and their pre-training procedures capture. Moreover, we demonstrate the potential of language models as semantic and background knowledge bases for supporting symbolic inference methods.
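A minimal sketch of what such a probe might look like, assuming a frozen encoder, mean-pooled final-layer representations, and a logistic-regression probe trained on top; the model name, the toy premise–hypothesis pairs, and their labels are hypothetical illustrations, not the paper's actual probing datasets or setup.

```python
# Sketch: probe a frozen pre-trained encoder for a linguistic property.
# Assumptions (not from the paper): bert-base-uncased as the encoder,
# mean pooling over the final layer, and a toy monotonicity-style task
# where label 1 = the premise entails the hypothesis, 0 = it does not.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # the encoder stays frozen; only the probe is trained

def embed(sentences):
    """Mean-pool the final-layer token embeddings for each sentence."""
    batch = tokenizer(sentences, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state   # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)      # (B, T, 1)
    summed = (hidden * mask).sum(dim=1)               # ignore padding
    return (summed / mask.sum(dim=1)).numpy()

# Hypothetical probing examples (premise [SEP] hypothesis).
train_texts = [
    "Every dog barks. [SEP] Every poodle barks.",
    "Every dog barks. [SEP] Every animal barks.",
    "Some poodle barks. [SEP] Some dog barks.",
    "Some animal barks. [SEP] Some dog barks.",
]
train_labels = [1, 0, 1, 0]

probe = LogisticRegression(max_iter=1000)
probe.fit(embed(train_texts), train_labels)

test_texts = ["No dog barks. [SEP] No poodle barks."]
print(probe.predict(embed(test_texts)))  # probe's predicted label
```

If the probe classifies held-out examples well above a random baseline, the property is plausibly encoded in the frozen representations; if not, fine-tuning the encoder on the probing task, as the paper's second finding suggests, can supply the missing information.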
- Zeming Chen (18 papers)
- Qiyue Gao (8 papers)