Model Misalignment and Language Change: Traces of AI-Associated Language in Unscripted Spoken English (2508.00238v1)

Published 1 Aug 2025 in cs.CL and cs.AI

Abstract: In recent years, written language, particularly in science and education, has undergone remarkable shifts in word usage. These changes are widely attributed to the growing influence of LLMs, which frequently rely on a distinct lexical style. Divergences between model output and target audience norms can be viewed as a form of misalignment. While these shifts are often linked to using AI directly as a tool to generate text, it remains unclear whether the changes reflect broader changes in the human language system itself. To explore this question, we constructed a dataset of 22.1 million words from unscripted spoken language drawn from conversational science and technology podcasts. We analyzed lexical trends before and after ChatGPT's release in 2022, focusing on commonly LLM-associated words. Our results show a moderate yet significant increase in the usage of these words post-2022, suggesting a convergence between human word choices and LLM-associated patterns. In contrast, baseline synonym words exhibit no significant directional shift. Given the short time frame and the number of words affected, this may indicate the onset of a remarkable shift in language use. Whether this represents natural language change or a novel shift driven by AI exposure remains an open question. Similarly, although the shifts may stem from broader adoption patterns, it may also be that upstream training misalignments ultimately contribute to changes in human language use. These findings parallel ethical concerns that misaligned models may shape social and moral beliefs.

Summary

  • The paper demonstrates that unscripted spoken language shows a moderate, statistically significant increase in AI-associated word usage after ChatGPT's release.
  • The study utilizes podcast transcripts, employing lemmatization, POS-tagging, and statistical tests to contrast pre-2022 and post-2022 speech patterns.
  • Findings reveal the complexity of distinguishing LLM influence from organic language change, with broad implications for model alignment and ethical communication.

Model Misalignment and Language Change: Traces of AI-Associated Language in Unscripted Spoken English

Introduction and Motivation

The paper investigates the extent to which lexical patterns characteristic of LLMs have permeated genuinely human-produced, unscripted spoken English. The central research question is whether the observed post-2022 increase in AI-associated words in written and spoken language reflects mere tool usage (i.e., direct copying of LLM output) or signals a deeper influence on the human language system itself. This distinction is critical for understanding the broader societal and ethical implications of model misalignment, particularly as LLMs are increasingly integrated into communication workflows.

The authors situate their work within the context of rapid, technology-driven language change, drawing parallels to historical shifts induced by the printing press, telephony, and the internet. They note that while sudden spikes in word usage are often linked to real-world events, the recent proliferation of terms such as "delve," "intricate," and "underscore" in academic writing appears to be decoupled from such events and instead correlates with the widespread adoption of LLMs, especially ChatGPT.

Figure 1: Whether the sharp post-2022 rise in AI-associated words reflects mere tool usage or a direct influence on the human language system remains an open question with broader societal relevance.

Methodology

To address the challenge of human-authorship indeterminacy in written texts, the paper focuses on unscripted spoken language from conversational science and technology podcasts. The dataset comprises 22.1 million words, balanced across pre-2022 (2019–2021) and post-2022 (2023–2025) periods, with 1:1 episode ratios per podcast to control for regional and varietal differences. Transcripts were either sourced directly or generated using OpenAI Whisper, followed by lemmatization and POS-tagging via spaCy.
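The preprocessing step described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the paper uses OpenAI Whisper for transcription and spaCy for lemmatization and POS-tagging, whereas here a tiny hand-made lemma map stands in for a real lemmatizer so the example is self-contained.

```python
import re
from collections import Counter

# Toy stand-in for a lemmatizer: in the paper, spaCy maps inflected
# forms to lemmas; this hypothetical lookup covers one word family.
LEMMA_MAP = {"delves": "delve", "delved": "delve", "delving": "delve"}

def count_lemmas(transcript: str) -> Counter:
    """Lowercase a transcript, tokenize on word characters,
    map inflected forms to their lemma, and count occurrences."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    return Counter(LEMMA_MAP.get(tok, tok) for tok in tokens)

counts = count_lemmas("We delve deeper; she delves too, having delved before.")
```

In practice the same per-lemma counts would be accumulated separately over the pre-2022 and post-2022 episode pools before any frequency comparison.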

The analysis targets 20 AI-associated words previously identified in the literature as overused by LLMs, comparing their occurrences per million (OPM) before and after ChatGPT's release. Baseline synonym words serve as controls. Statistical inference is performed using weighted log-ratio means and z-tests for group-level effects, with chi-square contingency tests for individual lemmata.
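The core frequency comparison can be sketched in a few lines. Assumptions to note: the natural log and the frequency weighting below are plausible readings of "weighted log-ratio mean," but the summary does not specify the exact base or weighting scheme the authors used.

```python
import math

def opm(count: int, total_words: int) -> float:
    """Occurrences per million words (OPM)."""
    return count / total_words * 1_000_000

def log_ratio(count_pre: int, total_pre: int,
              count_post: int, total_post: int) -> float:
    """Natural-log ratio of post-2022 to pre-2022 OPM.
    Positive values indicate increased usage after ChatGPT's release."""
    return math.log(opm(count_post, total_post) / opm(count_pre, total_pre))

def weighted_mean(log_ratios: list[float], weights: list[float]) -> float:
    """Weighted mean log-ratio across the target word list; the group-level
    z-test then divides this mean by its estimated standard error."""
    return sum(r * w for r, w in zip(log_ratios, weights)) / sum(weights)

# Hypothetical example: a word rising from 50 to 80 hits in 10M words each period.
shift = log_ratio(50, 10_000_000, 80, 10_000_000)  # log(1.6) ≈ 0.470
```

The standard-error estimate needed for the z-statistic is not given in the summary, so it is left out rather than guessed at.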

Results

The weighted mean analysis reveals a moderate but statistically significant increase in the usage of AI-associated words post-2022 (weighted log-ratio mean = 0.210, z = 3.725, p < 0.001). Of the 20 target words, 14 show increased usage, with 5 reaching statistical significance ("significant," "align," "strategically," "boast," "surpass"). Conversely, 6 words decrease in frequency, with "crucial" and "realm" showing significant declines. Notably, "delve"—a prototypical LLM-associated term—does not exhibit a significant increase, and "realm" decreases, contradicting expectations based on written language trends.
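The per-lemma significance calls reported above rest on a standard chi-square contingency test, which can be written out directly. This is a textbook 2x2 test with the counts shown being hypothetical, not figures from the paper.

```python
def chi_square_2x2(count_pre: int, total_pre: int,
                   count_post: int, total_post: int) -> float:
    """Chi-square statistic for a 2x2 contingency table:
    rows = period (pre-/post-2022), columns = target lemma vs. all other words.
    With df = 1, values above 3.841 are significant at p = 0.05."""
    table = [
        [count_pre, total_pre - count_pre],
        [count_post, total_post - count_post],
    ]
    grand = total_pre + total_post
    col_totals = [table[0][0] + table[1][0], table[0][1] + table[1][1]]
    row_totals = [total_pre, total_post]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical lemma: 30 hits pre vs. 70 hits post in 100k-word samples.
stat = chi_square_2x2(30, 100_000, 70, 100_000)  # well above 3.841
```

In production analyses one would typically call `scipy.stats.chi2_contingency` instead; the explicit loop is shown here to make the expected-count arithmetic visible.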

Baseline synonym words display only a negligible, non-significant overall change (weighted log-ratio mean = 0.033, z = 1.277, p > 0.05), with increases and decreases roughly balanced.

Figure 2: Pre- vs post-2022 logged proportional change for AI-associated words (left) vs baseline words (right).

Discussion

Interpretation of Lexical Shifts

The findings support a rejection of the null hypothesis, indicating a selective convergence of human lexical choices with LLM-associated patterns in unscripted spoken English. However, the magnitude of the effect is moderate, and the directionality is not uniform across all target words. The contrast between sharp spikes in written usage and more subdued changes in spoken language suggests that tool usage may be the primary driver in written domains, while spoken language reflects a slower, more selective adoption.

The lack of significant increase for "delve" and the decrease for "realm" highlight the complexity of attributing causality to LLM influence. The authors caution that many AI-associated words were already trending upward prior to 2022, raising the possibility that LLMs amplify existing language change rather than initiate it. Definitive causal attribution would require longitudinal tracking of individual speakers and their exposure to LLM-generated language.

Implications for Model Alignment and Societal Impact

The observed "seep-in" effect—where repeated exposure to AI-generated language subtly alters human lexical preferences—has direct relevance for model alignment debates. If LLMs encode stylistic or lexical biases not representative of the broader user base, these biases may propagate into human communication norms, raising ethical concerns analogous to those in value alignment, fairness, and bias amplification.

The paper also foregrounds the problem of human-authorship indeterminacy, which complicates linguistic research and any scientific field relying on natural language as a proxy for human cognition. As AI-generated content becomes ubiquitous, distinguishing genuine human language production from tool-assisted output will require increasingly sophisticated methodologies.

Limitations and Future Directions

The dataset is restricted to tech- and science-focused podcasts, whose speakers likely have above-average LLM exposure, so the findings may not generalize to the broader population. The authors acknowledge the need for larger, more diverse datasets and advocate for qualitative analyses to elucidate micro-level mechanisms underlying observed macro-level trends. The potential for a self-consuming training loop—where human language influenced by LLMs becomes future training data—warrants further investigation, particularly regarding its impact on linguistic diversity and model robustness.

Conclusion

This paper provides empirical evidence of a moderate, statistically significant increase in the usage of AI-associated words in unscripted spoken English following the release of ChatGPT. While the results suggest a selective convergence of human language with LLM lexical patterns, the distinction between AI-induced and natural language change remains unresolved. The implications extend beyond linguistics to model alignment, ethics, and the methodology of language research itself. Future work should focus on expanding dataset diversity, refining causal inference, and developing robust frameworks for tracking the evolving interplay between human and machine language systems.
