
Cross-lingual Information Retrieval with BERT (2004.13005v1)

Published 24 Apr 2020 in cs.IR, cs.CL, cs.LG, and stat.ML

Abstract: Multiple neural language models have been developed recently, e.g., BERT and XLNet, and achieved impressive results in various NLP tasks including sentence classification, question answering and document ranking. In this paper, we explore the use of the popular bidirectional language model, BERT, to model and learn the relevance between English queries and foreign-language documents in the task of cross-lingual information retrieval. A deep relevance matching model based on BERT is introduced and trained by finetuning a pretrained multilingual BERT model with weak supervision, using home-made CLIR training data derived from parallel corpora. Experimental results of the retrieval of Lithuanian documents against short English queries show that our model is effective and outperforms the competitive baseline approaches.
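The core idea of the abstract, finetuning a pretrained multilingual BERT model to score the relevance of an English query against a foreign-language document, can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's exact architecture: the model checkpoint, the two-class relevance head, and the example query/document texts are assumptions made for illustration.

```python
# Minimal sketch of a multilingual-BERT cross-lingual relevance scorer
# (illustrative only; the paper's exact model and training setup differ).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Pretrained multilingual BERT; finetuning on CLIR pairs would adapt it to relevance matching.
model_name = "bert-base-multilingual-cased"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Hypothetical English query paired with a Lithuanian document passage.
query = "election results"
document = "Rinkimu rezultatai buvo paskelbti sekmadienio vakara."

# Encode the pair as a single [CLS] query [SEP] document [SEP] sequence.
inputs = tokenizer(query, document, truncation=True, max_length=256, return_tensors="pt")

# A relevance score is read off the classifier over the [CLS] representation.
with torch.no_grad():
    logits = model(**inputs).logits
relevance = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"relevance score: {relevance:.3f}")
```

In a weakly supervised setup like the one the abstract describes, positive query-document pairs could be derived from parallel corpora (e.g., treating a sentence as a query and its translated document as relevant), with randomly sampled documents as negatives.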

Authors (5)
  1. Zhuolin Jiang (12 papers)
  2. Amro El-Jaroudi (2 papers)
  3. William Hartmann (11 papers)
  4. Damianos Karakos (3 papers)
  5. Lingjun Zhao (12 papers)
Citations (50)