
Beyond Distributional Hypothesis: Let Language Models Learn Meaning-Text Correspondence (2205.03815v1)

Published 8 May 2022 in cs.CL and cs.AI

Abstract: The logical negation property (LNP), which implies generating different predictions for semantically opposite inputs, is an important property that a trustworthy LLM must satisfy. However, much recent evidence shows that large pre-trained LLMs (PLMs) do not satisfy this property. In this paper, we perform experiments using probing tasks to assess PLMs' understanding of the LNP. Unlike previous studies that only examined negation expressions, we expand the boundary of the investigation to lexical semantics. Through these experiments, we observe that PLMs violate the LNP frequently. To alleviate the issue, we propose a novel intermediate training task, named meaning-matching, designed to directly learn a meaning-text correspondence instead of relying on the distributional hypothesis. Through multiple experiments, we find that this task enables PLMs to learn lexical semantic information. Also, through fine-tuning experiments on 7 GLUE tasks, we confirm that it is a safe intermediate task that guarantees similar or better performance on downstream tasks. Finally, we observe that our proposed approach outperforms its counterparts despite its time and resource efficiency.
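The LNP check the abstract describes can be illustrated with a minimal sketch. The `stub_predict` function below is a hypothetical stand-in for a real masked-LM call (the paper's actual probing setup is more involved); the violation criterion shown, identical top predictions for a sentence and its negation, is an illustrative simplification.

```python
def lnp_violation(predict, sentence, negated_sentence):
    """Return True if the model yields the same top prediction for a
    sentence and its semantic opposite -- a violation of the logical
    negation property (LNP)."""
    return predict(sentence) == predict(negated_sentence)

# Hypothetical stub standing in for a masked language model.
# A PLM that relies purely on co-occurrence statistics often behaves
# like this: it fills the mask from 'birds ... fly' regardless of negation.
def stub_predict(text):
    return "fly" if "birds" in text.lower() else "unknown"

print(lnp_violation(stub_predict, "Birds can [MASK].",
                    "Birds cannot [MASK]."))  # prints True
```

An LNP-respecting model would return `False` here, since "cannot fly" should not be the preferred completion of the negated sentence.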

Authors (3)
  1. Myeongjun Jang (9 papers)
  2. Frank Mtumbuka (3 papers)
  3. Thomas Lukasiewicz (125 papers)
Citations (8)