
Constructing Multilingual Code Search Dataset Using Neural Machine Translation (2306.15604v1)

Published 27 Jun 2023 in cs.CL and cs.SE

Abstract: Code search is the task of finding program code that semantically matches a given natural language query. Although some existing datasets for this task are multilingual on the programming language side, their queries are only in English. In this research, we create a multilingual code search dataset covering four natural and four programming languages using a neural machine translation model. Using our dataset, we pre-train and fine-tune Transformer-based models and evaluate them on multiple code search test sets. Our results show that the model pre-trained on all natural and programming language data performs best in most cases. By applying back-translation data filtering to our dataset, we demonstrate that translation quality affects the model's performance to a certain extent, but data size matters more.
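The back-translation data filtering mentioned in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' released pipeline: the `translate` helper is a hypothetical placeholder for whatever NMT model performs the query translation, and sentence-level BLEU between the original English query and its round-trip back-translation is used here as the quality signal for filtering.

```python
# Minimal sketch of back-translation quality filtering for translated queries.
# Assumes a generic translate(text, src, tgt) NMT helper (hypothetical placeholder);
# the actual translation model and threshold used in the paper are not specified here.
from typing import Callable, List, Tuple

import sacrebleu  # pip install sacrebleu


def back_translation_filter(
    english_queries: List[str],
    translate: Callable[[str, str, str], str],
    target_lang: str,
    bleu_threshold: float = 30.0,  # illustrative default, not from the paper
) -> List[Tuple[str, str]]:
    """Keep (original, translated) query pairs whose round-trip BLEU passes a threshold."""
    kept = []
    for query in english_queries:
        translated = translate(query, "en", target_lang)             # forward translation
        back_translated = translate(translated, target_lang, "en")   # round trip back to English
        # Sentence-level BLEU between the original query and its back-translation
        # approximates translation quality without needing target-language references.
        bleu = sacrebleu.sentence_bleu(back_translated, [query]).score
        if bleu >= bleu_threshold:
            kept.append((query, translated))
    return kept
```

The abstract's finding that data size matters more than translation quality suggests a relatively permissive threshold would be preferable; the value above is only an illustrative default.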

Authors (4)
  1. Ryo Sekizawa (3 papers)
  2. Nan Duan (172 papers)
  3. Shuai Lu (90 papers)
  4. Hitomi Yanaka (29 papers)