semantic-features: A User-Friendly Tool for Studying Contextual Word Embeddings in Interpretable Semantic Spaces (2506.06169v1)

Published 6 Jun 2025 in cs.CL and cs.AI

Abstract: We introduce semantic-features, an extensible, easy-to-use library based on Chronis et al. (2023) for studying contextualized word embeddings of LMs by projecting them into interpretable spaces. We apply this tool in an experiment where we measure the contextual effect of the choice of dative construction (prepositional or double object) on the semantic interpretation of utterances (Bresnan, 2007). Specifically, we test whether "London" in "I sent London the letter." is more likely to be interpreted as an animate referent (e.g., as the name of a person) than in "I sent the letter to London." To this end, we devise a dataset of 450 sentence pairs, one in each dative construction, with recipients being ambiguous with respect to person-hood vs. place-hood. By applying semantic-features, we show that the contextualized word embeddings of three masked LMs show the expected sensitivities. This leaves us optimistic about the usefulness of our tool.

Summary

  • The paper presents a toolkit that projects contextual embeddings into semantic spaces to enhance interpretability.
  • It uses feed-forward models to map BERT embeddings onto dimensions defined by specific semantic norms.
  • The study reveals that models like BERT capture subtle semantic shifts in dative constructions with consistent feature activation patterns.

Analyzing Contextual Word Embeddings through Interpretable Semantic Spaces

The paper entitled "semantic-features: A User-Friendly Tool for Studying Contextual Word Embeddings in Interpretable Semantic Spaces" presents the development and application of a toolkit designed to map contextual word embeddings (CWEs) into interpretable semantic spaces. The primary objective is to facilitate linguistic analysis by projecting embeddings, particularly those from BERT, into semantically meaningful dimensions, guided by specific semantic norms.

Methodological Approach

The authors introduce semantic-features, a publicly available library that is extensible to various LMs. It builds on the methodology of Chronis et al. (2023), which projects CWEs from BERT into a vector space defined by selected semantic norms using feed-forward models. The system includes modules for embedding extraction, model training, and hyperparameter optimization, providing a comprehensive framework for analyzing embeddings. While the system is compatible with any LM, it is particularly suited to models with bidirectional context, such as BERT and its derivatives.
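
To make the projection step concrete, the following is a minimal PyTorch sketch of the Chronis et al. (2023)-style approach, not the library's actual API: a small feed-forward network is regressed from BERT embeddings onto semantic-norm ratings (here sized for the 65 Binder features). The tensors `train_embs` and `train_norms` are hypothetical placeholders for a word-to-norm training set.

```python
# Sketch: a feed-forward projector from contextual embeddings to
# interpretable semantic-norm dimensions (e.g., the 65 Binder features).
# This illustrates the general technique, not the semantic-features API.
import torch
import torch.nn as nn

class FeatureProjector(nn.Module):
    def __init__(self, emb_dim: int = 768, n_features: int = 65, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_features),  # one output per semantic norm
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def train_projector(train_embs: torch.Tensor, train_norms: torch.Tensor,
                    epochs: int = 100, lr: float = 1e-3) -> FeatureProjector:
    """Fit the projector as a simple MSE regression from embeddings to norms.
    `train_embs` (N x emb_dim) and `train_norms` (N x n_features) are
    placeholder tensors; a real setup would use held-out validation data
    and hyperparameter search, as the library's modules provide."""
    model = FeatureProjector(train_embs.size(1), train_norms.size(1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(train_embs), train_norms)
        loss.backward()
        opt.step()
    return model
```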

Case Study: Semantic Interpretation in Dative Constructions

To demonstrate the utility of the semantic-features toolkit, a linguistic case study is conducted on the semantics of recipient arguments in dative constructions. The study examines whether contextual embeddings reflect differences in semantic interpretation for ditransitive verbs that allow either a double object (DO) or a prepositional object (PO) construction. The hypothesis is that the recipient (e.g., "London") is more likely to be interpreted as animate (a person) in the DO construction than in the PO construction.
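
The following sketch illustrates how such a measurement could be set up with Hugging Face transformers: extract the contextual embedding of the recipient token from each dative variant, then, given a trained projector like the one sketched above, compare the projected feature activations. This is an illustration of the experimental logic under stated assumptions, not the paper's exact pipeline.

```python
# Sketch: pull the recipient's contextual embedding from each dative variant.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def recipient_embedding(sentence: str, recipient: str) -> torch.Tensor:
    """Mean-pool the hidden states of the recipient's word-piece tokens."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state[0]  # (seq_len, 768)
    target_ids = tokenizer(recipient, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for i in range(len(ids) - len(target_ids) + 1):
        if ids[i:i + len(target_ids)] == target_ids:
            return hidden[i:i + len(target_ids)].mean(dim=0)
    raise ValueError(f"{recipient!r} not found in {sentence!r}")

do_emb = recipient_embedding("I sent London the letter.", "London")    # double object
po_emb = recipient_embedding("I sent the letter to London.", "London")  # prepositional

# With a trained projector (see the earlier sketch), the hypothesis predicts
# higher person-related feature activations for the DO variant:
# projector = train_projector(train_embs, train_norms)
# delta = projector(do_emb) - projector(po_emb)
```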

Results and Implications

The analysis reveals that the masked LMs studied, BERT, RoBERTa, and ALBERT, do capture the semantic contrast between DO and PO constructions. The results show a predictable shift in feature activation: projection models trained with semantic-features register changes in the person-hood and place-hood features of recipients across contexts, evidenced by distinct and consistent activation patterns when the recipients' CWEs are projected into the Binder semantic feature space.
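
Under the same assumptions as the sketches above, the aggregate pattern could be quantified as a per-feature mean activation shift over the dataset; `pairs` is a hypothetical list of (DO sentence, PO sentence, recipient) triples standing in for the paper's 450 sentence pairs.

```python
# Sketch: per-feature mean DO-minus-PO activation shift across the dataset.
# Reuses recipient_embedding() and a trained projector from the sketches above.
import torch

def mean_feature_shift(pairs, projector):
    deltas = []
    with torch.no_grad():
        for do_sent, po_sent, recipient in pairs:
            d = projector(recipient_embedding(do_sent, recipient))
            p = projector(recipient_embedding(po_sent, recipient))
            deltas.append(d - p)
    return torch.stack(deltas).mean(dim=0)  # one mean shift per Binder feature
```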

The authors' findings suggest that LMs are sensitive to context-dependent semantic variations, which can be effectively studied through projection into interpretable semantic spaces. This capability has significant theoretical implications, providing evidence for the ability of LMs to encode fine-grained semantic distinctions implicit in linguistic alternations. Practically, this tool could enhance research in computational linguistics by providing a means to extract and analyze semantic information from embeddings in a more interpretable form.

Future Directions

This work opens several avenues for future research. Extending the toolkit to accommodate a wider range of LMs and semantic norms could facilitate broader applications, and expanding the case studies to other linguistic phenomena could test the generality of these findings. The current system's reliance on BERT-style architectures also invites exploration of autoregressive models, despite the limitation that their embeddings encode only left context. Finally, integrating semantic-features into routine computational-linguistic analyses could enhance the interpretability and robustness of studies of LMs.

Conclusion

The paper contributes a valuable toolkit for the linguistic community, bridging the gap between complex contextual embeddings and semantically interpretable analysis. By enabling researchers to study word embeddings within meaningful semantic spaces, this work both demonstrates the nuanced capabilities of LMs and provides practical tools for linguistic research and analysis.
