
Silo NLP's Participation at WAT2022 (2208.01296v1)

Published 2 Aug 2022 in cs.CL

Abstract: This paper provides the system description of "Silo NLP's" submission to the Workshop on Asian Translation (WAT2022). We have participated in the Indic Multimodal tasks (English->Hindi, English->Malayalam, and English->Bengali Multimodal Translation). For text-only translation, we trained Transformers from scratch and fine-tuned mBART-50 models. For multimodal translation, we used the same mBART architecture and extracted object tags from the images to use as visual features concatenated with the text sequence. Our submission tops many tasks including English->Hindi multimodal translation (evaluation test), English->Malayalam text-only and multimodal translation (evaluation test), English->Bengali multimodal translation (challenge test), and English->Bengali text-only translation (evaluation test).
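
To make the multimodal input scheme concrete, here is a minimal sketch (not the authors' code) of the approach the abstract describes: object tags detected in the image are concatenated onto the English source sentence, and the combined sequence is fed to mBART-50. It assumes HuggingFace Transformers, the public mBART-50 MT checkpoint, and a precomputed list of object tags; the detector, the exact separator, and the example sentence are illustrative assumptions.

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_name = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(
    model_name, src_lang="en_XX", tgt_lang="hi_IN"
)
model = MBartForConditionalGeneration.from_pretrained(model_name)

source = "A man in a red shirt is playing with a dog."
object_tags = ["man", "shirt", "dog"]  # hypothetical output of an object detector

# Concatenate the tags with the text sequence (the separator is an assumption);
# the paper uses such tags as visual features alongside the source text.
multimodal_input = source + " " + " ".join(object_tags)

inputs = tokenizer(multimodal_input, return_tensors="pt")
generated = model.generate(
    **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["hi_IN"]
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```

In this setup the visual information enters the model purely as extra source-side tokens, so the standard mBART-50 fine-tuning pipeline can be reused unchanged for the multimodal tasks.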

Authors (5)
  1. Shantipriya Parida
  2. Subhadarshi Panda
  3. Stig-Arne Grönroos
  4. Mark Granroth-Wilding
  5. Mika Koistinen