Is Translation All You Need? A Study on Solving Multilingual Tasks with Large Language Models (2403.10258v2)

Published 15 Mar 2024 in cs.CL

Abstract: LLMs have demonstrated multilingual capabilities, yet they remain mostly English-centric due to imbalanced training corpora. Existing works leverage this phenomenon to improve multilingual performance through translation, primarily on NLP tasks. This work extends the evaluation from NLP tasks to real user queries, and from English-centric LLMs to non-English-centric ones. While translating into English can improve the performance of English-centric LLMs on multilingual NLP tasks, it is not optimal for all scenarios. For culture-related tasks that require deep language understanding, prompting in the native language tends to be more promising, as it better captures the nuances of culture and language. Our experiments reveal varied behaviors across LLMs and tasks in the multilingual context. We therefore advocate for more comprehensive multilingual evaluation and for greater effort toward developing multilingual LLMs beyond English-centric ones.
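The two prompting strategies the abstract contrasts can be sketched as follows. This is a minimal illustration, not the paper's actual setup: the `translate_to_english` stub, the example query, and the strategy names are all assumptions introduced here; a real pipeline would use an MT system or the LLM itself for the translation step.

```python
# Sketch of the two strategies compared in the paper:
#   Strategy A: prompt the LLM directly in the user's language,
#               preserving cultural and linguistic nuance.
#   Strategy B: translate the query into English first, exploiting the
#               English-centric training distribution of most LLMs.
# The translation step is a hypothetical stub for illustration only.

def translate_to_english(text: str) -> str:
    """Placeholder MT step; swap in a real translation model or an LLM call."""
    lookup = {
        "¿Cuál es la capital de Francia?": "What is the capital of France?",
    }
    return lookup.get(text, text)

def native_prompt(query: str) -> str:
    # Strategy A: the query is passed to the model unchanged.
    return query

def translate_then_prompt(query: str) -> str:
    # Strategy B: the English rendition is what the model actually sees.
    return translate_to_english(query)

query = "¿Cuál es la capital de Francia?"
print(native_prompt(query))          # the original Spanish query
print(translate_then_prompt(query))  # its English translation
```

The paper's finding is that neither strategy dominates: Strategy B tends to help English-centric LLMs on standard NLP tasks, while Strategy A is more promising for culture-related tasks that depend on nuances lost in translation.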

Authors (5)
  1. Chaoqun Liu (38 papers)
  2. Wenxuan Zhang (75 papers)
  3. Yiran Zhao (26 papers)
  4. Anh Tuan Luu (69 papers)
  5. Lidong Bing (144 papers)
Citations (6)