
Robust Data Hiding Using Inverse Gradient Attention (2011.10850v5)

Published 21 Nov 2020 in cs.CV and cs.CR

Abstract: Data hiding is the procedure of encoding desired information into a certain type of cover media (e.g., images) so that the data can be recovered despite potential noise, while ensuring the embedded image exhibits few perceptual perturbations. Recently, with the tremendous success of deep neural networks in various fields, research on data hiding with deep learning models has attracted increasing attention. In deep data hiding models, to maximize the encoding capacity, each pixel of the cover image ought to be treated differently, since pixels differ in their sensitivity with respect to visual quality. Neglecting this per-pixel sensitivity inevitably harms the model's robustness for information hiding. In this paper, we propose a novel deep data hiding scheme with Inverse Gradient Attention (IGA), which draws on the attention mechanism to assign different attention weights to different pixels. Equipped with the proposed modules, the model can spotlight pixels that are more robust for data hiding. Extensive experiments demonstrate that the proposed model outperforms mainstream deep-learning-based data hiding methods on two prevalent datasets under multiple evaluation metrics. In addition, we identify and discuss the connection between the proposed inverse gradient attention and high-frequency regions within images, which can serve as an informative reference for the deep data hiding research community. The code is available at: https://github.com/hongleizhang/IGA.
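The abstract only sketches the mechanism, so below is a minimal, hypothetical PyTorch sketch of how a per-pixel inverse gradient attention map might be computed: gradients of the message-recovery loss are taken with respect to the cover image, normalized, and inverted so that less sensitive pixels receive larger weights. The `encoder`, `decoder`, and `loss_fn` arguments and the min-max normalization are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def inverse_gradient_attention(cover, message, encoder, decoder, loss_fn, eps=1e-8):
    """Illustrative sketch of a per-pixel inverse gradient attention map.

    Assumptions (not taken from the paper's code): `encoder(cover, message)`
    returns a stego image, `decoder(stego)` returns the recovered message,
    and `loss_fn` measures message-recovery error. Pixels whose gradients
    w.r.t. that loss are small are treated as more tolerant of perturbation
    and receive attention weights closer to 1.
    """
    cover = cover.clone().detach().requires_grad_(True)

    # Forward pass: embed the message, then attempt to recover it.
    stego = encoder(cover, message)
    recovered = decoder(stego)
    loss = loss_fn(recovered, message)

    # Gradient of the recovery loss w.r.t. each cover-image pixel.
    grad, = torch.autograd.grad(loss, cover)
    grad_mag = grad.abs()

    # Min-max normalize per image, then invert: low-gradient (less
    # sensitive) pixels get attention weights near 1.
    flat = grad_mag.flatten(1)
    g_min = flat.min(dim=1, keepdim=True).values.view(-1, 1, 1, 1)
    g_max = flat.max(dim=1, keepdim=True).values.view(-1, 1, 1, 1)
    attention = 1.0 - (grad_mag - g_min) / (g_max - g_min + eps)
    return attention.detach()
```

In the authors' pipeline such an attention map would presumably modulate the encoder's embedding residual; consult the linked repository for the actual implementation.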

Citations (12)