Impact of Source-Only Fine-Tuning on Target Performance

Establish whether fine-tuning large language models solely on source-language splits of Year-ECLeKTic or similar knowledge-intensive datasets increases target-language accuracy and source–target agreement without any training on target-language data.
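A minimal sketch of such a source-only fine-tuning run, assuming the source split is available as a local JSONL file with question/answer fields; the file path, field names, model checkpoint, and hyperparameters below are illustrative placeholders, not details from the paper:

```python
# Sketch: fine-tune a causal LM on the source-language split only.
# No target-language data enters training; target accuracy and
# source-target agreement are measured afterwards.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)

MODEL = "meta-llama/Llama-3.1-8B"  # placeholder; any causal LM works

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Source split only (hypothetical file and field names).
source = load_dataset("json", data_files="source_split.jsonl")["train"]

def tokenize(example):
    text = f"Q: {example['question']}\nA: {example['answer']}"
    return tokenizer(text, truncation=True, max_length=512)

train = source.map(tokenize, remove_columns=source.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="sft_source_only",
        num_train_epochs=10,            # many epochs on purpose: the
        learning_rate=1e-5,             # hypothesis concerns boosting
        per_device_train_batch_size=4,  # source confidence by overfitting
    ),
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```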

Background

Based on the theoretical implications of their analysis, the authors hypothesize that improving source-language confidence via overfitting could reduce cross-lingual gaps; they perform preliminary fine-tuning on the source split but do not obtain conclusive evidence.

This leaves open the question of whether source-only fine-tuning reliably improves target accuracy and source–target agreement, motivating better-controlled and statistically more powerful experiments.
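For concreteness, a hedged sketch of the two measurements the question asks about, assuming per-question model answers are available in both languages and scoring by exact match (the benchmark's actual scoring criterion may differ):

```python
# Illustrative computation of target accuracy and source-target
# agreement from parallel predictions; exact match is an assumption.
from typing import List

def exact_match(pred: str, gold: str) -> bool:
    return pred.strip().lower() == gold.strip().lower()

def evaluate(src_preds: List[str], tgt_preds: List[str],
             golds: List[str]) -> dict:
    src_ok = [exact_match(p, g) for p, g in zip(src_preds, golds)]
    tgt_ok = [exact_match(p, g) for p, g in zip(tgt_preds, golds)]
    n = len(golds)
    return {
        "source_accuracy": sum(src_ok) / n,
        "target_accuracy": sum(tgt_ok) / n,
        # Agreement: fraction of questions on which the source- and
        # target-language answers are both correct or both incorrect.
        "agreement": sum(s == t for s, t in zip(src_ok, tgt_ok)) / n,
    }

print(evaluate(["Paris"], ["paris"], ["Paris"]))
# -> {'source_accuracy': 1.0, 'target_accuracy': 1.0, 'agreement': 1.0}
```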

References

As a result, we do not have conclusive evidence on the question we attempted to validate: does improving source performance alone also improve target performance?

Piratla et al., "Rethinking Cross-lingual Gaps from a Statistical Viewpoint" (arXiv:2510.15551, 17 Oct 2025), Appendix: Fine-Tuning Experiments (appendix:sft_expts).