Learning to Represent Bilingual Dictionaries (1808.03726v3)

Published 10 Aug 2018 in cs.CL, cs.AI, and cs.LG

Abstract: Bilingual word embeddings have been widely used to capture the similarity of lexical semantics across human languages. However, many applications, such as cross-lingual semantic search and question answering, can benefit greatly from cross-lingual correspondence between sentences and lexicons. To bridge this gap, we propose a neural embedding model that leverages bilingual dictionaries. The proposed model is trained to map literal word definitions to their cross-lingual target words, for which we explore different sentence encoding techniques. To enhance learning on limited resources, our model adopts several critical learning strategies, including multi-task learning across different language bridges and joint learning of the dictionary model with a bilingual word embedding model. Experimental evaluation focuses on two applications. Results on the cross-lingual reverse dictionary retrieval task show our model's promising ability to comprehend bilingual concepts from descriptions, and highlight the effectiveness of the proposed learning strategies in improving performance. Meanwhile, our model effectively addresses the bilingual paraphrase identification problem and significantly outperforms previous approaches.
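The core objective described in the abstract, encoding a literal definition in one language and pulling it toward the embedding of its target word in another language, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the class name, the GRU encoder, and the cosine loss are assumptions for concreteness, whereas the paper compares several sentence encoders and layers multi-task and joint training on top of this basic objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilingualDictionaryModel(nn.Module):
    """Encode a source-language definition and map it toward the
    embedding of its cross-lingual target word (illustrative sketch)."""

    def __init__(self, src_vocab_size, tgt_vocab_size,
                 embed_dim=300, hidden_dim=300):
        super().__init__()
        # Source-language embeddings for the definition tokens.
        self.src_embed = nn.Embedding(src_vocab_size, embed_dim)
        # Target-language word embeddings; in the joint-learning setting
        # these could be tied to a bilingual word embedding model.
        self.tgt_embed = nn.Embedding(tgt_vocab_size, embed_dim)
        # One plausible sentence encoder; the paper explores several.
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, embed_dim)

    def encode_definition(self, def_ids):
        # def_ids: (batch, seq_len) token ids of the literal definition.
        _, h = self.encoder(self.src_embed(def_ids))
        return self.proj(h[-1])  # (batch, embed_dim)

    def loss(self, def_ids, tgt_ids):
        # Pull the encoded definition toward the target word embedding.
        pred = self.encode_definition(def_ids)
        tgt = self.tgt_embed(tgt_ids)
        return (1.0 - F.cosine_similarity(pred, tgt, dim=-1)).mean()
```

Under this sketch, cross-lingual reverse dictionary retrieval reduces to nearest-neighbor search over the rows of tgt_embed.weight, using the encoded definition as the query.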

Authors (6)
  1. Muhao Chen (159 papers)
  2. Yingtao Tian (32 papers)
  3. Haochen Chen (7 papers)
  4. Kai-Wei Chang (292 papers)
  5. Steven Skiena (49 papers)
  6. Carlo Zaniolo (20 papers)
Citations (13)
