With Ears to See and Eyes to Hear: Sound Symbolism Experiments with Multimodal Large Language Models (2409.14917v2)

Published 23 Sep 2024 in cs.CL

Abstract: Recently, Large Language Models (LLMs) and Vision Language Models (VLMs) have demonstrated aptitude as potential substitutes for human participants in experiments testing psycholinguistic phenomena. However, an understudied question is to what extent models that only have access to vision and text modalities are able to implicitly understand sound-based phenomena via abstract reasoning from orthography and imagery alone. To investigate this, we analyse the ability of VLMs and LLMs to demonstrate sound symbolism (i.e., to recognise a non-arbitrary link between sounds and concepts) as well as their ability to "hear" via the interplay of the language and vision modules of open and closed-source multimodal models. We perform multiple experiments, including replicating the classic Kiki-Bouba and Mil-Mal shape and magnitude symbolism tasks, and comparing human judgements of linguistic iconicity with those of LLMs. Our results show that VLMs demonstrate varying levels of agreement with human labels, and more task information may be required for VLMs versus their human counterparts for in silico experimentation. We additionally see through higher maximum agreement levels that Magnitude Symbolism is an easier pattern for VLMs to identify than Shape Symbolism, and that an understanding of linguistic iconicity is highly dependent on model size.

Summary

  • The paper demonstrates that multimodal models can infer sound symbolism from text and imagery alone, though less consistently than human judges.
  • It shows that closed-source models such as GPT-4 perform better on the Kiki-Bouba and Mil-Mal tasks, especially when given enriched task-context prompts.
  • The findings suggest that explicit sound-symbolism training could improve model accuracy in psycholinguistic applications.

Sound Symbolism Experiments with Multimodal LLMs

The paper "With Ears to See and Eyes to Hear: Sound Symbolism Experiments with Multimodal Large Language Models" by Loakman, Li, and Lin examines whether Vision Language Models (VLMs) and LLMs can grasp sound-based phenomena through abstract reasoning over orthography and imagery alone. The central question is whether models with access only to vision and text modalities can replicate human-like behaviour when interpreting sound symbolism. The paper focuses on the classic Kiki-Bouba shape symbolism and Mil-Mal magnitude symbolism tasks, and compares human judgements of linguistic iconicity with those of LLMs.

Analysis of Classic Psycholinguistic Phenomena

Sound symbolism denotes a non-arbitrary relationship between speech sounds and the meanings of the words they constitute. The research leverages LLMs and VLMs to analyze sound symbolism by conducting a series of psycholinguistic tasks designed to test the models' abilities to implicitly understand these phenomena. The experiments considered include:

  1. Shape Symbolism (Kiki-Bouba Effect): This experiment involves associating pseudowords with shapes based on properties such as spikiness or roundness. Closed-source models like GPT-4 showed higher agreement with human judgements, especially when given additional task-specific prompts. None of the models aligned with human responses consistently, a discrepancy that might be attributed to factors such as data contamination or positional biases (a minimal prompting sketch follows this list).
  2. Magnitude Symbolism (Mil-Mal Effect): Magnitude symbolism involves associating certain vowels with perceived physical size (e.g., "Mil" with smaller entities and "Mal" with larger ones). Here, the closed-source GPT-4 again displayed higher accuracy than open-source counterparts such as the LLaVA models. Notably, when provided with additional task context, the models performed markedly better, suggesting some grasp of the relationship between sound and size.
  3. Iconicity Ratings: This task compared model judgements of linguistic iconicity, the degree to which a word's form resembles its meaning, against human ratings. Various models, including GPT-4, GPT-3.5-Turbo, and different sizes of LLaMA-2, were evaluated. The findings highlighted a positive correlation between model size and the ability to emulate human judgements, with GPT-4 demonstrating the highest agreement (see the correlation sketch below).
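
To make the forced-choice setup concrete, the following is a minimal sketch of how a single Kiki-Bouba trial might be posed to a vision-capable model through the OpenAI chat completions API. The prompt wording, image file, and pseudoword pair are illustrative assumptions, not the authors' exact materials; the same scaffold adapts to the Mil-Mal magnitude task by swapping the image and question.

```python
# Hypothetical sketch of a single Kiki-Bouba forced-choice trial.
# Prompt wording, image file, and pseudoword pair are illustrative,
# not the authors' exact experimental materials.
import base64

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def encode_image(path: str) -> str:
    """Read a local image and return it as a base64 data URL."""
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()


def forced_choice_trial(image_path: str, word_a: str, word_b: str) -> str:
    """Ask a vision-capable model which pseudoword best names the pictured shape."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable chat model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": (f"Here is a shape. Which of the two made-up words, "
                          f"'{word_a}' or '{word_b}', is a better name for it? "
                          "Answer with the single word only.")},
                {"type": "image_url",
                 "image_url": {"url": encode_image(image_path)}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()


# e.g. forced_choice_trial("spiky_shape.png", "kiki", "bouba")
```

In practice, the order of the two candidate words would be counterbalanced across trials to control for the positional biases noted above.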

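For the iconicity comparison, agreement between model and human ratings can be quantified with a rank correlation. The snippet below is a sketch under the assumption that Spearman's rho is an appropriate metric; the ratings shown are made-up placeholders, not data from the paper.

```python
# Hypothetical sketch: quantifying model-human agreement on iconicity ratings.
# The ratings below are made-up placeholders, not data from the paper.
from scipy.stats import spearmanr

human_ratings = [4.5, 1.2, 3.8, 2.1, 4.9]  # e.g. human norms on a 1-5 iconicity scale
model_ratings = [4.0, 1.5, 3.5, 2.8, 4.7]  # ratings elicited from an LLM for the same words

rho, p_value = spearmanr(human_ratings, model_ratings)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```
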
Implications and Future Directions

The paper reveals that VLMs and LLMs exhibit varying degrees of human-like sound symbolism understanding, depending on the context provided and the size of the model. These results suggest that models can implicitly learn sound symbolism from the orthographic sequences present in their training data, though not as effectively as humans, likely because they lack auditory input and any explicit training focused on sound attributes. This capability has implications for tasks such as sentiment analysis, creative content generation, and marketing.

From a theoretical standpoint, these results underline the utility of multimodal training data in fostering more comprehensive language understanding within models. For future research, explicit pre-training on sound-symbolism-centric datasets could significantly augment model performance in related tasks. Additionally, exploring techniques to enhance task-specific context-awareness within prompts could optimize existing models for various applications, leading to more nuanced and versatile NLP systems.

Conclusion

The paper offers an insightful exploration into the extent to which modern VLMs and LLMs can mimic human sound symbolism without direct auditory input. Through meticulous experiments, it elucidates the promising yet limited capabilities of these models, paving the way for further enhancements in multimodal AI research. The findings advocate for a balanced approach integrating sound-symbolism-focused training and refined task prompts to better align model outputs with human perception in psycholinguistic domains.
