
Gloss-free Sign Language Translation: Improving from Visual-Language Pretraining (2307.14768v1)

Published 27 Jul 2023 in cs.CV

Abstract: Sign Language Translation (SLT) is a challenging task due to its cross-domain nature, involving the translation of visual-gestural language to text. Many previous methods employ an intermediate representation, i.e., gloss sequences, to facilitate SLT, thus transforming it into a two-stage task of sign language recognition (SLR) followed by sign language translation (SLT). However, the scarcity of gloss-annotated sign language data, combined with the information bottleneck in the mid-level gloss representation, has hindered the further development of the SLT task. To address this challenge, we propose a novel Gloss-Free SLT based on Visual-Language Pretraining (GFSLT-VLP), which improves SLT by inheriting language-oriented prior knowledge from pre-trained models, without any gloss annotation assistance. Our approach involves two stages: (i) integrating Contrastive Language-Image Pre-training (CLIP) with masked self-supervised learning to create pre-tasks that bridge the semantic gap between visual and textual representations and restore masked sentences, and (ii) constructing an end-to-end architecture with an encoder-decoder-like structure that inherits the parameters of the pre-trained Visual Encoder and Text Decoder from the first stage. The seamless combination of these novel designs forms a robust sign language representation and significantly improves gloss-free sign language translation. In particular, we have achieved unprecedented improvements in terms of BLEU-4 score on the PHOENIX14T dataset (>+5) and the CSL-Daily dataset (>+3) compared to state-of-the-art gloss-free SLT methods. Furthermore, our approach also achieves competitive results on the PHOENIX14T dataset when compared with most of the gloss-based methods. Our code is available at https://github.com/zhoubenjia/GFSLT-VLP.

Gloss-Free Sign Language Translation: A Visual-Language Pretraining Approach

The paper "Gloss-free Sign Language Translation: Improving from Visual-Language Pretraining" addresses the demanding task of Sign Language Translation (SLT), which involves the translation of visual-gestural language into spoken language text. Traditionally, SLT methodologies have relied on intermediate gloss annotations, a practice that necessitates a two-stage approach comprising sign language recognition (sign-to-gloss) and subsequent gloss-based sign language translation (gloss-to-text). This reliance poses challenges due to the labor-intensive nature of gloss annotation and the formation of an information bottleneck at the gloss level.

The researchers develop an innovative gloss-free SLT methodology rooted in Visual-Language Pretraining (VLP) and inspired by recent advances in Contrastive Language-Image Pre-training (CLIP). The approach proceeds in two stages: first, a pretraining stage that couples a CLIP-style contrastive objective with masked self-supervised learning to bridge the semantic gap between visual signs and textual representations; and second, an end-to-end encoder-decoder translation architecture that inherits the parameters of the pretrained Visual Encoder and Text Decoder from the first stage (a sketch of the contrastive objective is given below).
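
To make the first-stage objective concrete, the following is a minimal PyTorch sketch of a CLIP-style symmetric contrastive loss between pooled sign-video and sentence embeddings. It illustrates the general form of such an objective under stated assumptions and is not the authors' implementation; the function name, tensor shapes, and temperature value are hypothetical.

```python
import torch
import torch.nn.functional as F

def clip_style_contrastive_loss(video_emb: torch.Tensor,
                                text_emb: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired video/text embeddings.

    video_emb, text_emb: (batch, dim) pooled outputs of the visual and text
    encoders; names and pooling strategy are illustrative assumptions.
    """
    # Normalize so the dot products below are cosine similarities.
    video_emb = F.normalize(video_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix; the diagonal holds the true pairs.
    logits = video_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Contrast in both directions (video-to-text and text-to-video), as in CLIP.
    loss_v2t = F.cross_entropy(logits, targets)
    loss_t2v = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_v2t + loss_t2v)
```

In the paper's first stage, a contrastive term of this kind is paired with a masked-sentence reconstruction objective; in the second stage, the pretrained Visual Encoder and Text Decoder are reused to initialize the end-to-end translation model.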

Despite forgoing gloss annotations, the proposed method, GFSLT-VLP, achieves substantial improvements in translation quality. It outperforms existing state-of-the-art gloss-free methods by more than 5 BLEU-4 points on the PHOENIX14T dataset and more than 3 points on the CSL-Daily dataset. Furthermore, it delivers translation accuracy competitive with many gloss-based methods, a notable result given that it bypasses the intermediate gloss representation entirely.
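
For reference, a BLEU-4 figure of the kind reported above can be computed with the sacrebleu library; the hypothesis and reference strings below are placeholders, not outputs or annotations from PHOENIX14T or CSL-Daily.

```python
import sacrebleu

# Placeholder system outputs and a single reference stream.
hypotheses = ["tomorrow it will be partly sunny in the north"]
references = [["in the north it will be partly sunny tomorrow"]]

# corpus_bleu uses up to 4-gram precisions by default, i.e. BLEU-4.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU-4: {bleu.score:.2f}")
```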

The implications of this research are multifaceted. Practically, the elimination of the gloss annotation dependency considerably enhances the scalability of SLT models, enabling deployment across broader linguistic datasets without the excessive labor costs tied to gloss generation. From a theoretical standpoint, the use of VLP in SLT presents an exciting venture into aligning multimodal semantic spaces, a step that can potentially accelerate advancements in broader cross-modal translation tasks involving other visual and textual data forms.

Looking towards future enhancements, the paper advocates exploring large-scale pretraining on extensive SLT datasets, which could further leverage the capabilities of the VLP framework. Moreover, as the paper illustrates, the flexibility of this method makes it well-positioned to accommodate and benefit from future advancements in LLMs and visual representation learning, promising continued contributions to the evolution of AI applications in accessible communication technologies for the deaf community.

Authors (8)
  1. Benjia Zhou (12 papers)
  2. Zhigang Chen (102 papers)
  3. Albert Clapés (14 papers)
  4. Jun Wan (79 papers)
  5. Yanyan Liang (29 papers)
  6. Sergio Escalera (127 papers)
  7. Zhen Lei (205 papers)
  8. Du Zhang (9 papers)
Citations (32)