Lipper: Synthesizing Thy Speech using Multi-View Lipreading (1907.01367v1)

Published 28 Jun 2019 in eess.AS, cs.LG, cs.SD, and stat.ML

Abstract: Lipreading has many potential applications, for example in surveillance and video conferencing. Despite this, most work on lipreading systems has been limited to classifying silent videos into classes representing text phrases. However, treating lipreading as a text-based classification task raises several problems, such as dependence on a particular language and on a vocabulary mapping. In this paper we therefore propose a multi-view lipreading-to-audio system, namely Lipper, which models the problem as a regression task. The model takes silent videos as input and produces speech as output. With multi-view silent videos, we observe an improvement over single-view speech reconstruction results. We show this through an exhaustive set of experiments in speaker-dependent, out-of-vocabulary, and speaker-independent settings. Further, we compare the delay values of Lipper with those of other speechreading systems to demonstrate the real-time nature of the audio produced. We also perform a user study on the generated audio to assess its comprehensibility.
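The abstract frames Lipper as a regression model that maps multi-view silent lip videos directly to speech rather than to text classes. As a rough, hypothetical illustration of that framing only, the PyTorch sketch below wires per-view video encoders, a fusion step, and a recurrent decoder that predicts audio feature frames. The encoder and decoder choices, the 80-dimensional audio features, the class name `MultiViewSpeechRegressor`, and the assumption that video and audio feature frame rates align are all illustrative assumptions, not the architecture reported in the paper.

```python
# Hypothetical sketch of a multi-view video-to-speech regression model.
# All architectural details here are illustrative assumptions, not Lipper's design.
import torch
import torch.nn as nn


class MultiViewSpeechRegressor(nn.Module):
    """Regresses audio feature frames from several synchronized silent lip-video views."""

    def __init__(self, num_views: int = 3, view_channels: int = 32, audio_dim: int = 80):
        super().__init__()
        # One small 3D-conv encoder per camera view; spatial dims are pooled away,
        # leaving a per-frame feature vector for each view.
        self.encoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(1, view_channels, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d((None, 1, 1)),  # keep the time axis, pool space
            )
            for _ in range(num_views)
        ])
        # Recurrent decoder over the fused per-frame features.
        self.decoder = nn.LSTM(
            input_size=num_views * view_channels,
            hidden_size=256,
            batch_first=True,
        )
        self.head = nn.Linear(256, audio_dim)

    def forward(self, views: list[torch.Tensor]) -> torch.Tensor:
        # Each view tensor: (batch, 1, time, height, width) grayscale lip crops.
        per_view = []
        for enc, v in zip(self.encoders, views):
            f = enc(v)                          # (batch, channels, time, 1, 1)
            f = f.squeeze(-1).squeeze(-1)       # (batch, channels, time)
            per_view.append(f.transpose(1, 2))  # (batch, time, channels)
        fused = torch.cat(per_view, dim=-1)     # concatenate views frame by frame
        h, _ = self.decoder(fused)
        return self.head(h)                     # (batch, time, audio_dim)


if __name__ == "__main__":
    model = MultiViewSpeechRegressor(num_views=3)
    # Three synthetic camera views: 8-frame clips of 48x48 grayscale lip crops.
    views = [torch.randn(2, 1, 8, 48, 48) for _ in range(3)]
    audio_features = model(views)
    print(audio_features.shape)  # torch.Size([2, 8, 80])
    # Training would minimize a regression loss, e.g. nn.MSELoss(), against
    # audio features extracted from the ground-truth speech.
```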

Authors (6)
  1. Yaman Kumar (23 papers)
  2. Rohit Jain (12 papers)
  3. Khwaja Mohd. Salik (1 paper)
  4. Rajiv Ratn Shah (108 papers)
  5. Roger Zimmermann (76 papers)
  6. Yifang Yin (24 papers)
Citations (34)