
Probing Linguistic Information For Logical Inference In Pre-trained Language Models (2112.01753v2)

Published 3 Dec 2021 in cs.CL, cs.AI, and cs.LG

Abstract: Progress in pre-trained LLMs has led to a surge of impressive results on downstream tasks for natural language understanding. Recent work on probing pre-trained LLMs has uncovered a wide range of linguistic properties encoded in their contextualized representations. However, it is unclear whether they encode the semantic knowledge that is crucial to symbolic inference methods. We propose a methodology for probing linguistic information for logical inference in pre-trained LLM representations. Our probing datasets cover a list of linguistic phenomena required by major symbolic inference systems. We find that (i) pre-trained LLMs do encode several types of linguistic information for inference, though some types are only weakly encoded, and (ii) LLMs can effectively learn missing linguistic information through fine-tuning. Overall, our findings provide insights into which aspects of linguistic information for logical inference LLMs and their pre-training procedures capture. Moreover, we demonstrate LLMs' potential as semantic and background knowledge bases for supporting symbolic inference methods.
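The probing methodology described above is typically realized by training a lightweight classifier on frozen contextual representations and checking whether it can predict a linguistic relation. The sketch below illustrates that general setup only; it is not the authors' released code, and the model name (`bert-base-uncased`), the toy hyponymy word pairs, the mean-pooling choice, and the logistic-regression probe are all illustrative assumptions.

```python
# Minimal linear-probe sketch: can a simple classifier recover a lexical
# inference relation (hyponymy) from frozen pre-trained representations?
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

# Hypothetical probing examples: label 1 if word A entails (is a hyponym of) word B.
pairs = [
    ("dog", "animal", 1),
    ("car", "vehicle", 1),
    ("dog", "car", 0),
    ("animal", "vehicle", 0),
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def encode(word_a: str, word_b: str):
    # Encode the pair as one sequence and mean-pool the frozen hidden states.
    # The pooling strategy is an assumption for illustration.
    inputs = tokenizer(word_a, word_b, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0).numpy()

X = [encode(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]

# If a linear probe predicts the relation well from frozen representations,
# that information is (linearly) encoded in the pre-trained model.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("probe accuracy on these pairs:", probe.score(X, y))
```

In practice a held-out split and baselines (e.g., probing random embeddings) are needed before concluding that the information is genuinely encoded rather than memorized by the probe.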

Authors (2)
  1. Zeming Chen (18 papers)
  2. Qiyue Gao (8 papers)
Citations (7)