Interaction between example selection and scale in many-shot in-context learning for extremely low-resource MT

Determine how in-context example selection strategies interact with the number of in-context examples in many-shot in-context learning for machine translation involving extremely low-resource languages, specifically characterizing how scaling the number of demonstrations affects translation quality under different selection methods.

Background

In-context learning performance in machine translation is highly sensitive to which examples are provided, and relevance-based retrieval has been shown to improve few-shot prompting for high-resource languages. More recently, many-shot prompting has scaled the number of demonstrations into the hundreds or thousands, but its benefits and costs vary with language and setup.
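To make the relevance-based selection idea concrete, here is a minimal sketch of retrieving in-context examples by similarity to the test source and assembling a translation prompt. It is illustrative only: the bag-of-words cosine similarity stands in for the sentence-embedding retrievers typically used in practice, and the pool, prompt format, and function names are hypothetical, not taken from the paper.

```python
from collections import Counter
import math

def similarity(a: str, b: str) -> float:
    """Cosine similarity over token-count vectors (a simple stand-in
    for embedding-based retrieval)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

def build_prompt(source: str, pool: list[tuple[str, str]], k: int) -> str:
    """Select the k pool pairs most similar to `source` and format a
    many-shot translation prompt; `pool` holds (source, target) pairs."""
    ranked = sorted(pool, key=lambda p: similarity(source, p[0]), reverse=True)
    lines = [f"Source: {s}\nTarget: {t}" for s, t in ranked[:k]]
    lines.append(f"Source: {source}\nTarget:")  # model completes the target
    return "\n\n".join(lines)
```

Scaling the many-shot regime then amounts to increasing `k`, which is exactly the axis whose interaction with the selection strategy the paper studies.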

For extremely low-resource languages, effective retrieval is often weaker and the dynamics of adding many examples may differ from high-resource cases. The paper highlights that it is not yet established how example selection interacts with scaling the number of in-context examples in this many-shot regime for these languages.

References

However, it remains unclear how example selection interacts with scale in the many-shot setting, particularly for extremely LRLs.

An Empirical Study of Many-Shot In-Context Learning for Machine Translation of Low-Resource Languages (2604.02596 - Lu et al., 3 Apr 2026) in Section 4 (Related Work)