MELA: Multilingual Evaluation of Linguistic Acceptability (2311.09033v3)

Published 15 Nov 2023 in cs.CL and cs.AI

Abstract: In this work, we present the largest benchmark to date on linguistic acceptability: Multilingual Evaluation of Linguistic Acceptability -- MELA, with 46K samples covering 10 languages from a diverse set of language families. We establish LLM baselines on this benchmark, and investigate cross-lingual transfer in acceptability judgments with XLM-R. In pursuit of multilingual interpretability, we conduct probing experiments with fine-tuned XLM-R to explore the process of syntax capability acquisition. Our results show that GPT-4o exhibits strong multilingual ability, outperforming fine-tuned XLM-R, while open-source multilingual models lag behind by a noticeable gap. Cross-lingual transfer experiments show that transfer in acceptability judgment is non-trivial: 500 Icelandic fine-tuning examples lead to an MCC of 23 in a completely unrelated language -- Chinese. Results of our probing experiments indicate that training on MELA improves the performance of XLM-R on syntax-related tasks. Our data is available at https://github.com/sjtu-compling/MELA.
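To make the fine-tuned XLM-R baseline concrete, below is a minimal sketch of training a binary acceptability classifier and scoring it with the Matthews correlation coefficient (MCC, scaled by 100 as in the abstract). It assumes the Hugging Face transformers/datasets stack; the file names, column names ("sentence", "label"), and hyperparameters are hypothetical placeholders, not the paper's actual setup -- see the MELA repository for the real data format and training details.

import numpy as np
from datasets import load_dataset
from sklearn.metrics import matthews_corrcoef
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Binary acceptability classifier on top of XLM-R (acceptable vs. unacceptable).
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-large", num_labels=2
)

# Hypothetical TSV layout with "sentence" and "label" columns.
data = load_dataset(
    "csv",
    data_files={"train": "mela_train.tsv", "test": "mela_test.tsv"},
    delimiter="\t",
)

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, max_length=128)

data = data.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # MELA reports Matthews correlation scaled to the [-100, 100] range.
    return {"mcc": 100 * matthews_corrcoef(labels, preds)}

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="xlmr-mela",
        learning_rate=2e-5,
        per_device_train_batch_size=16,
        num_train_epochs=3,
    ),
    train_dataset=data["train"],
    eval_dataset=data["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())  # metrics dict includes "eval_mcc"

A cross-lingual transfer run like the Icelandic-to-Chinese experiment in the abstract would follow the same recipe, restricting the training split to a few hundred examples in one language and evaluating on the test split of another.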

Authors (6)
  1. Ziyin Zhang (16 papers)
  2. Yikang Liu (20 papers)
  3. Weifang Huang (3 papers)
  4. Junyu Mao (5 papers)
  5. Rui Wang (996 papers)
  6. Hai Hu (23 papers)
Citations (3)
GitHub: https://github.com/sjtu-compling/MELA