
Exploiting Multimodal Reinforcement Learning for Simultaneous Machine Translation (2102.11387v1)

Published 22 Feb 2021 in cs.CL

Abstract: This paper addresses the problem of simultaneous machine translation (SiMT) by exploring two main ideas: (a) adaptive policies that learn a good trade-off between high translation quality and low latency; and (b) visual information to support this process by providing additional contextual cues that may be available before the textual input is produced. To this end, we propose a multimodal approach to simultaneous machine translation using reinforcement learning, with strategies for integrating visual and textual information in both the agent and the environment. We explore how different types of visual information and integration strategies affect the quality and latency of simultaneous translation models, and demonstrate that visual cues lead to higher quality while keeping latency low.
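The adaptive policies the abstract refers to are typically framed as an agent choosing between READ (consume another source token) and WRITE (commit a target token) actions, trading translation quality against latency. A minimal sketch of that decision loop is below; all names (`run_episode`, the wait-k policy, the copy "translator", the latency proxy) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the READ/WRITE action paradigm in
# simultaneous machine translation. In the paper's RL setting, the
# policy would be a learned agent (optionally conditioned on visual
# features); here a fixed wait-k rule stands in for it.

def run_episode(source_tokens, policy, translate_prefix):
    """Alternate READ (consume a source token) and WRITE (emit a
    target token) until the hypothesis is complete."""
    read = 0            # source tokens consumed so far
    hypothesis = []     # target tokens emitted so far
    delays = []         # source context size at each WRITE (for latency)
    while True:
        action = policy(read, len(hypothesis), len(source_tokens))
        if action == "READ" and read < len(source_tokens):
            read += 1
        else:  # WRITE
            token = translate_prefix(source_tokens[:read], hypothesis)
            if token is None:  # end of hypothesis
                break
            hypothesis.append(token)
            delays.append(read)
    # Crude latency proxy: mean number of source tokens seen per write
    # (in the spirit of Average Lagging, not the exact metric).
    latency = sum(delays) / max(len(delays), 1)
    return hypothesis, latency

def wait_k_policy(k):
    """Fixed policy: stay k source tokens ahead of the output."""
    def policy(read, written, src_len):
        return "READ" if read < min(written + k, src_len) else "WRITE"
    return policy

def copy_translator(src_prefix, hypothesis):
    """Toy stand-in for an NMT decoder: copies the source."""
    if len(hypothesis) < len(src_prefix):
        return src_prefix[len(hypothesis)]
    return None

hyp, lat = run_episode(["ein", "kleines", "Haus"],
                       wait_k_policy(1), copy_translator)
print(hyp, lat)  # → ['ein', 'kleines', 'Haus'] 2.0
```

An adaptive RL policy replaces `wait_k_policy` with a learned function of the current source/target (and, in the multimodal case, visual) context, rewarded for quality and penalized for latency.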

Authors (6)
  1. Julia Ive (25 papers)
  2. Andy Mingren Li (1 paper)
  3. Yishu Miao (19 papers)
  4. Ozan Caglayan (20 papers)
  5. Pranava Madhyastha (37 papers)
  6. Lucia Specia (68 papers)
Citations (10)