Visually Analyzing Contextualized Embeddings (2009.02554v1)
Abstract: In this paper we introduce a method for visually analyzing contextualized embeddings produced by deep neural network-based language models. Our approach is inspired by linguistic probing tasks in natural language processing, which are designed to probe language models for linguistic structure, such as parts of speech and named entities. These approaches are largely confirmatory, however, only enabling a user to test for information known a priori. In this work, we eschew supervised probing tasks and instead advocate for unsupervised probes, coupled with visual exploration techniques, to assess what is learned by language models. Specifically, we cluster contextualized embeddings produced from a large text corpus, and introduce a visualization design based on this clustering and textual structure (cluster co-occurrences, cluster spans, and cluster-word membership) to help elicit the functionality of, and relationship between, individual clusters. User feedback highlights the benefits of our design in discovering different types of linguistic structure.
- Matthew Berger (22 papers)
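To make the pipeline described in the abstract concrete, the sketch below extracts contextualized token embeddings from a pretrained transformer and clusters them so that cluster labels can be aligned back to token occurrences (the basis for cluster-word membership, spans, and co-occurrence views). This is a minimal illustration, not the paper's exact method: the choice of `bert-base-uncased` as the encoder and plain k-means as the clustering algorithm are assumptions made for this example.

```python
# Minimal sketch: cluster contextualized token embeddings from a corpus.
# Assumptions (not from the paper): bert-base-uncased encoder, k-means clustering.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.cluster import KMeans

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Tiny stand-in corpus; the paper operates on a large text corpus.
sentences = [
    "The bank raised interest rates.",
    "They sat on the bank of the river.",
]

tokens, embeddings = [], []
with torch.no_grad():
    for sent in sentences:
        enc = tokenizer(sent, return_tensors="pt")
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
        ids = enc["input_ids"][0].tolist()
        for tok, vec in zip(tokenizer.convert_ids_to_tokens(ids), hidden):
            if tok not in ("[CLS]", "[SEP]"):
                tokens.append(tok)
                embeddings.append(vec.numpy())

# Unsupervised "probe": assign every token occurrence to a cluster.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embeddings)

# Cluster-word membership: which token occurrences fall into each cluster.
# Note how occurrences of "bank" may land in different clusters by context.
for tok, lab in zip(tokens, labels):
    print(f"{tok:>10s} -> cluster {lab}")
```

From per-token cluster assignments like these, one could tabulate which clusters co-occur within a sentence and which spans of adjacent tokens share a cluster, which are the kinds of summaries the visualization design is built on.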