LISA: Localized Image Stylization with Audio via Implicit Neural Representation (2211.11381v1)

Published 21 Nov 2022 in cs.CV, cs.MM, cs.SD, and eess.AS

Abstract: We present a novel framework, Localized Image Stylization with Audio (LISA), which performs audio-driven localized image stylization. Sound often provides information about the specific context of the scene and is closely related to a certain part of the scene or object. However, existing image stylization works have focused on stylizing the entire image using an image or text input. Stylizing a particular part of the image based on audio input is natural but challenging. In this work, we propose a framework in which a user provides one audio input to localize the sound source in the input image and another to locally stylize the target object or scene. LISA first produces a delicate localization map with an audio-visual localization network by leveraging the CLIP embedding space. We then utilize an implicit neural representation (INR) along with the predicted localization map to stylize the target object or scene based on sound information. The proposed INR can manipulate the localized pixel values to be semantically consistent with the provided audio input. Through a series of experiments, we show that the proposed framework outperforms other audio-guided stylization methods. Moreover, LISA constructs concise localization maps and naturally manipulates the target object or scene in accordance with the given audio input.
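The abstract describes a two-stage pipeline: an audio-visual localization network produces a soft localization map, and an implicit neural representation then restyles only the localized region. The sketch below is a minimal, hypothetical PyTorch illustration of that second stage (mask-guided blending of a coordinate-based MLP's output into the image); the names `StyleINR` and `localized_stylize` are illustrative placeholders, and the paper's actual architecture, positional encodings, and CLIP-based audio-alignment losses are not reproduced here.

```python
import torch
import torch.nn as nn

class StyleINR(nn.Module):
    """Coordinate-based MLP (implicit neural representation) mapping
    pixel coordinates to stylized RGB values. Placeholder architecture."""
    def __init__(self, hidden_dim=256, num_layers=4):
        super().__init__()
        layers, in_dim = [], 2  # (x, y) pixel coordinates
        for _ in range(num_layers):
            layers += [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
            in_dim = hidden_dim
        layers += [nn.Linear(in_dim, 3), nn.Tanh()]  # RGB in [-1, 1]
        self.mlp = nn.Sequential(*layers)

    def forward(self, coords):       # coords: (H*W, 2) in [-1, 1]
        return self.mlp(coords)      # (H*W, 3)

def localized_stylize(image, localization_map, inr):
    """Blend the INR's predicted colors into the image only where the
    audio-driven localization map is active.

    image:            (3, H, W) tensor in [0, 1]
    localization_map: (1, H, W) soft mask in [0, 1], assumed to come from
                      the audio-visual localization network
    inr:              a StyleINR instance
    """
    _, H, W = image.shape
    ys = torch.linspace(-1.0, 1.0, H)
    xs = torch.linspace(-1.0, 1.0, W)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    coords = torch.stack([grid_x, grid_y], dim=-1).reshape(-1, 2)

    styled = inr(coords).reshape(H, W, 3).permute(2, 0, 1)  # (3, H, W)
    styled = (styled + 1) / 2                               # map to [0, 1]

    # Only the localized region is modified; the rest of the image is kept.
    return localization_map * styled + (1 - localization_map) * image
```

In this reading, the mask-weighted blend is what confines the manipulation to the sound-source region, while training the INR against an audio/image similarity objective (e.g. in a CLIP-like embedding space, as the abstract suggests) would make the stylized pixels semantically consistent with the input audio.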

Authors (6)
  1. Seung Hyun Lee (10 papers)
  2. Chanyoung Kim (14 papers)
  3. Wonmin Byeon (27 papers)
  4. Sang Ho Yoon (10 papers)
  5. Jinkyu Kim (51 papers)
  6. Sangpil Kim (35 papers)
Citations (3)
