Revealing the Parallel Multilingual Learning within Large Language Models (2403.09073v2)

Published 14 Mar 2024 in cs.CL

Abstract: In this study, we reveal an in-context learning (ICL) capability of multilingual LLMs: by translating the input into several languages, we provide Parallel Input in Multiple Languages (PiM) to LLMs, which significantly enhances their comprehension abilities. To test this capability, we design extensive experiments encompassing 8 typical datasets, 7 languages, and 8 state-of-the-art multilingual LLMs. Experimental results show that (1) incorporating more languages helps PiM surpass conventional ICL even further; (2) even combining with translations that are inferior to baseline performance can also help. Moreover, by examining the activated neurons in LLMs, we discover a counterintuitive but interesting phenomenon. Contrary to the common expectation that PiM would activate more neurons than monolingual input in order to leverage knowledge learned from diverse languages, PiM actually inhibits neurons and promotes more precise neuron activation, especially when more languages are added. This phenomenon aligns with the neuroscience insight on synaptic pruning, which removes less-used neural connections, strengthens the remaining ones, and thereby enhances brain intelligence.

LLMs are Parallel Multilingual Learners

Introduction

This paper explores the in-context learning (ICL) capabilities of LLMs for processing and understanding information provided in multiple languages simultaneously. It introduces a novel prompting approach, Parallel Input in Multiple Languages (PiM), which significantly enhances the comprehension abilities of multilingual LLMs by augmenting the standard input with translations of the task input into several languages. Through extensive experiments across a diverse set of datasets, languages, and state-of-the-art multilingual LLMs, the paper demonstrates the efficacy of PiM in improving performance on a variety of tasks, including machine translation, language inference, reading comprehension, text simplification, and abstractive summarization.

Parallel Input in Multiple Languages

The paper presents PiM as a method that leverages the inherent capability of multilingual LLMs to process inputs in multiple languages. PiM involves translating the original input into several languages and presenting these translations alongside the original input to the LLM. This is theorized to enrich the context available to the model, thereby improving its performance. The hypothesis is substantiated by significant improvements observed across eight datasets, seven languages, and eight leading multilingual LLMs. Notably, PiM remains effective even when the added translations would, on their own, underperform the baseline, suggesting it is a robust way to enhance multilingual model performance.
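To make the setup concrete, the following is a minimal sketch of how a PiM-style prompt could be assembled: the original input is followed by its translations into additional languages, and the task instruction comes last. The `build_pim_prompt` helper and the prompt layout are illustrative assumptions, not the authors' released code; any MT system (or the LLM itself) can supply the translations.

```python
# Minimal sketch of assembling a PiM-style prompt. The helper name and the
# exact prompt layout are hypothetical, not the paper's released implementation.

def build_pim_prompt(instruction: str, source_input: str, translations: dict[str, str]) -> str:
    """Concatenate the original input with its translations, then the instruction."""
    parts = [f"Input (original): {source_input}"]
    for lang, text in translations.items():
        parts.append(f"Input ({lang}): {text}")
    parts.append(instruction)
    return "\n".join(parts)

# Example: pair an English input with (machine-)translated German and French
# versions before asking the model to perform the task.
prompt = build_pim_prompt(
    instruction="Summarize the input above in one sentence.",
    source_input="The committee approved the budget after a long debate.",
    translations={
        "German": "Der Ausschuss genehmigte das Budget nach einer langen Debatte.",
        "French": "Le comité a approuvé le budget après un long débat.",
    },
)
print(prompt)
```

The resulting prompt is then sent to the multilingual LLM as ordinary input; per the paper's findings, even imperfect machine translations added this way can still improve performance.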

Insights and Theoretical Implications

A counterintuitive discovery made through neuron activation analysis in LLMs suggests that, contrary to expectations, PiM does not necessarily increase the number of activated neurons. Instead, it inhibits neurons while promoting more precise neuron activation, especially with the addition of more languages to the input. This observation indicates a potential optimization in how LLMs access and utilize multilingual knowledge, aligning with processes of synaptic pruning observed in neurological studies. These findings suggest that PiM's effectiveness may stem from inducing a more efficient use of the model's neural network, emphasizing quality over quantity in neuron activation.
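As a rough illustration of how such a neuron-activation analysis can be instrumented, the sketch below uses forward hooks to count FFN neurons whose post-nonlinearity value is positive for at least one token of an input. The choice of model, the hooked module names, and the "activation > 0" criterion are assumptions made for illustration, not necessarily the paper's exact measurement protocol.

```python
# Rough sketch of counting "activated" FFN neurons for one input via forward
# hooks. Assumptions (not the authors' exact setup): a Hugging Face causal LM,
# hooks on the FFN nonlinearity modules, and "activated" meaning a positive
# post-nonlinearity value on at least one token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # any multilingual causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

activated_per_layer = {}

def make_hook(layer_name):
    def hook(module, inputs, output):
        # Flatten (batch, seq, ffn_dim) -> (tokens, ffn_dim) and count neurons
        # that fire (value > 0) on at least one token.
        flat = output.reshape(-1, output.shape[-1])
        activated_per_layer[layer_name] = int((flat > 0).any(dim=0).sum().item())
    return hook

hooks = []
for name, module in model.named_modules():
    # BLOOM exposes its FFN nonlinearity as `...mlp.gelu_impl`; other model
    # families may need a different module filter.
    if name.endswith("mlp.gelu_impl") or isinstance(module, torch.nn.GELU):
        hooks.append(module.register_forward_hook(make_hook(name)))

with torch.no_grad():
    batch = tokenizer("Le comité a approuvé le budget.", return_tensors="pt")
    model(**batch)

print(sum(activated_per_layer.values()), "activated FFN neurons across hooked layers")
for h in hooks:
    h.remove()
```

Comparing this count for a monolingual input against the same input augmented PiM-style would, under the paper's observation, show fewer rather than more activated neurons as languages are added.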

Practical Applications and Future Directions

The paper demonstrates the broad applicability of PiM across various NLP tasks and its compatibility with multiple LLM architectures ranging from 7B to 176B parameters. The success of PiM in improving translation tasks, even with machine-translated inputs, opens new pathways for enhancing LLM performance in real-world scenarios. Furthermore, the paper highlights an intriguing direction for future research on neuron activation patterns in LLMs and their relation to learning processes in the human brain. Given the effectiveness of PiM, further exploration of tailored prompting strategies for different types of tasks and languages could yield additional gains in model performance and efficiency.

Conclusions

This research contributes significantly to the field by demonstrating a simple yet effective strategy for improving the performance of multilingual LLMs across a range of tasks. By adopting PiM, the paper not only provides a practical method for leveraging the multilingual capabilities of LLMs but also offers new insights into how neural networks can be optimized for multilingual understanding. The findings on neuron activation patterns offer a fascinating glimpse into potential parallels between artificial and biological learning processes, presenting an exciting avenue for interdisciplinary research bridging AI and neuroscience.

Authors (11)
  1. Yongyu Mu
  2. Peinan Feng
  3. Zhiquan Cao
  4. Yuzhang Wu
  5. Bei Li
  6. Chenglong Wang
  7. Tong Xiao
  8. Kai Song
  9. Tongran Liu
  10. Chunliang Zhang
  11. Jingbo Zhu