Metaphors in Pre-Trained Language Models: Probing and Generalization Across Datasets and Languages (2203.14139v1)

Published 26 Mar 2022 in cs.CL

Abstract: Human languages are full of metaphorical expressions. Metaphors help people understand the world by connecting new concepts and domains to more familiar ones. Large pre-trained language models (PLMs) are therefore assumed to encode metaphorical knowledge useful for NLP systems. In this paper, we investigate this hypothesis for PLMs, by probing metaphoricity information in their encodings, and by measuring the cross-lingual and cross-dataset generalization of this information. We present studies in multiple metaphor detection datasets and in four languages (i.e., English, Spanish, Russian, and Farsi). Our extensive experiments suggest that contextual representations in PLMs do encode metaphorical knowledge, and mostly in their middle layers. The knowledge is transferable between languages and datasets, especially when the annotation is consistent across training and testing sets. Our findings give helpful insights for both cognitive and NLP scientists.

Authors (3)
  1. Ehsan Aghazadeh (7 papers)
  2. Mohsen Fayyaz (31 papers)
  3. Yadollah Yaghoobzadeh (34 papers)
Citations (47)

Summary

Metaphors in Pre-Trained Language Models: A Probing Perspective

The exploration of metaphorical understanding in pre-trained language models (PLMs) has implications for both NLP and cognitive science. This paper presents a systematic examination of metaphorical knowledge encoded in PLMs across different datasets and languages, probing the models and analyzing their generalization capabilities. The research employs established probing techniques and assesses the transferability of metaphorical knowledge in multilingual settings.

Methodology and Experiments

The authors utilize both conventional edge probing and Minimum Description Length (MDL) probing to examine how metaphorical knowledge is encoded in PLMs such as BERT, RoBERTa, and ELECTRA. These methods are applied to four metaphor detection datasets (LCC, TroFi, VUA POS, and VUA Verbs), with experiments conducted in English, Spanish, Russian, and Farsi. By evaluating the layer-wise distribution of metaphorical information, the paper finds that PLMs encode this information predominantly in their middle layers.
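
To make the probing setup concrete, the following is a minimal sketch of layer-wise linear probing for metaphoricity, not the authors' exact pipeline: the toy sentences, target-word indices, and labels are hypothetical stand-ins for the LCC, TroFi, and VUA annotations, and a simple logistic-regression probe is fit per layer on frozen BERT representations of the target word.

```python
# Layer-wise probing sketch: fit a linear probe on frozen target-word
# representations from each layer and compare how decodable metaphoricity is.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

MODEL_NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True).eval()

# Hypothetical examples: (sentence, index of target word, 1 = metaphorical).
data = [
    ("He attacked every weak point in my argument", 1, 1),
    ("The soldiers attacked the fort at dawn", 2, 0),
    ("She devoured the novel in one sitting", 1, 1),
    ("The lion devoured its prey", 2, 0),
]

def target_embeddings(sentence, word_idx):
    """Return one vector per layer for the target word (first sub-token)."""
    enc = tokenizer(sentence.split(), is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).hidden_states  # tuple of (layers + 1) x [1, seq, dim]
    tok_pos = enc.word_ids(0).index(word_idx)  # first sub-token of the target word
    return [h[0, tok_pos].numpy() for h in hidden]

# Collect per-layer features, then fit a simple linear probe on each layer.
per_layer_X, labels = None, []
for sent, idx, y in data:
    vecs = target_embeddings(sent, idx)
    if per_layer_X is None:
        per_layer_X = [[] for _ in vecs]
    for layer, v in enumerate(vecs):
        per_layer_X[layer].append(v)
    labels.append(y)

for layer, X in enumerate(per_layer_X):
    probe = LogisticRegression(max_iter=1000).fit(X, labels)
    acc = accuracy_score(labels, probe.predict(X))  # train accuracy on the toy set
    print(f"layer {layer:2d}: probe accuracy {acc:.2f}")
```

In the paper's setup the probe is trained and evaluated on held-out data per layer (or scored with MDL); here the training accuracy on a toy set is printed only to show where the per-layer comparison comes from.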

Generalization capabilities are tested through cross-lingual and cross-dataset experiments. For the cross-lingual investigation, XLM-R, a strong multilingual PLM, is employed to probe metaphorical understanding across languages, as sketched below. For the cross-dataset analysis, the transferability of metaphorical knowledge is evaluated by training and testing on different metaphor datasets.
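
As an illustration of the cross-lingual setup, the sketch below trains a probe on English representations from XLM-R and evaluates it zero-shot on Spanish. The sentences, labels, choice of a single middle layer, and mean-pooling over tokens are all simplifying assumptions made for brevity, not the paper's token-level protocol.

```python
# Cross-lingual probing sketch: train the probe on English only, then test
# on Spanish, using representations from a fixed middle layer of XLM-R.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

MODEL_NAME = "xlm-roberta-base"
LAYER = 7  # a middle layer, where the paper finds metaphoricity is most decodable
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True).eval()

def sentence_vector(sentence):
    """Mean-pool the chosen layer over all tokens (a simplification of
    token-level probing, used here only to keep the sketch short)."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).hidden_states[LAYER]
    return hidden[0].mean(dim=0).numpy()

# Hypothetical English training data and Spanish test data (1 = metaphorical).
train = [("He attacked my argument", 1), ("The army attacked the city", 0),
         ("Time is money", 1), ("She paid ten dollars", 0)]
test = [("Atacó mis ideas con dureza", 1), ("El ejército atacó la ciudad", 0)]

X_train = [sentence_vector(s) for s, _ in train]
y_train = [y for _, y in train]
X_test = [sentence_vector(s) for s, _ in test]
y_test = [y for _, y in test]

# Zero-shot transfer: no Spanish examples are seen during probe training.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("zero-shot Spanish accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```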

Key Findings

  1. Encoding of Metaphorical Knowledge: The paper confirms that PLMs do encapsulate metaphorical knowledge, with a notable concentration in the middle layers. This finding aligns well with the hypothesis that initial layers capture lexical semantics and later layers encode sentence-level understanding.
  2. Cross-lingual Transferability: Results show significant transferability of metaphorical representations across languages using XLM-R, demonstrating the potential effectiveness of multilingual PLMs in capturing language-universal metaphorical knowledge.
  3. Cross-dataset Generalization: While PLMs exhibit some level of generalization across datasets, this capability is substantially lower than cross-lingual generalization. The differences in dataset annotations and metaphorical definitions likely contribute to this limitation.

Implications and Future Directions

The implications of this research are twofold. Practically, the demonstrated ability of PLMs to encode and transfer metaphorical knowledge suggests an enhanced toolkit for developers in NLP applications, contributing to improved machine translation and sentiment analysis systems that can handle metaphorical language more adeptly. Theoretically, the findings support linguistic theories underlying metaphor processing and offer pathways to align PLM training with cognitive models of language understanding.

Future research could explore optimizing model architectures for metaphor detection and examine how figurative language handling affects PLM-based text generation systems. Additionally, cross-cultural studies could deepen understanding of the universality and variability of metaphors in human cognition and language use, leveraging the multilingual nature of large-scale PLMs. Such investigations could extend the knowledge frontier at the intersection of NLP and cognitive linguistics, propelling advancements in both domains.
