
HiDDeN: Hiding Data With Deep Networks

Published 26 Jul 2018 in cs.CV and cs.LG | (1807.09937v1)

Abstract: Recent work has shown that deep neural networks are highly sensitive to tiny perturbations of input images, giving rise to adversarial examples. Though this property is usually considered a weakness of learned models, we explore whether it can be beneficial. We find that neural networks can learn to use invisible perturbations to encode a rich amount of useful information. In fact, one can exploit this capability for the task of data hiding. We jointly train encoder and decoder networks, where given an input message and cover image, the encoder produces a visually indistinguishable encoded image, from which the decoder can recover the original message. We show that these encodings are competitive with existing data hiding algorithms, and further that they can be made robust to noise: our models learn to reconstruct hidden information in an encoded image despite the presence of Gaussian blurring, pixel-wise dropout, cropping, and JPEG compression. Even though JPEG is non-differentiable, we show that a robust model can be trained using differentiable approximations. Finally, we demonstrate that adversarial training improves the visual quality of encoded images.


Summary

  • The paper introduces a deep network framework that jointly trains an encoder, decoder, and adversarial discriminator to embed secret messages in images.
  • It achieves bits-per-pixel encoding capacity competitive with established steganography algorithms while keeping the bit error rate below 10^-5, and its encodings are difficult for steganalysis tools to detect when the model weights are unknown.
  • The approach demonstrates robust performance against common image distortions through specialized noise layer training, making it practical for steganography and watermarking applications.


The research paper "HiDDeN: Hiding Data With Deep Networks" presents an innovative use of deep neural networks for data hiding in images. Traditionally, neural networks' sensitivity to minor perturbations in images has been seen as a vulnerability, leading to adversarial attacks. This work flips that perspective, proposing that this sensitivity can be harnessed beneficially for tasks like steganography and digital watermarking.

Methodological Approach

HiDDeN employs a framework of three convolutional neural networks: an encoder, a decoder, and an adversarial discriminator. The encoder embeds a secret message into a cover image, producing an encoded image that is visually indistinguishable from the original. The decoder reconstructs the message from the encoded image, even when it has been subjected to various distortions. The adversarial discriminator is trained to distinguish encoded images from unmodified ones; training the encoder to fool it improves the visual quality and secrecy of the encoding.
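To make the pipeline concrete, here is a deliberately simplified toy analogue in plain numpy. The real encoder and decoder are CNNs trained end-to-end, and the real decoder works blindly from the encoded image alone; in this sketch each bit is instead spread over the image as a small pseudo-random perturbation, and a non-blind decoder recovers it by correlating the residual with the known carrier patterns. All sizes and the `strength` parameter are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, L = 32, 32, 8                      # cover size and message length (hypothetical)
cover = rng.random((H, W))
message = rng.integers(0, 2, L)          # L secret bits

# One fixed pseudo-random "carrier" pattern per bit, shared by encoder and decoder.
carriers = rng.standard_normal((L, H, W))

def encode(cover, message, strength=0.01):
    """Add a tiny perturbation: +carrier for bit 1, -carrier for bit 0."""
    signs = 2 * message - 1              # map {0, 1} -> {-1, +1}
    perturbation = np.tensordot(signs, carriers, axes=1)
    return cover + strength * perturbation

def decode(encoded, cover):
    """Correlate the residual with each carrier and threshold at zero."""
    residual = encoded - cover
    scores = (carriers * residual).sum(axis=(1, 2))
    return (scores > 0).astype(int)

encoded = encode(cover, message)
recovered = decode(encoded, cover)
```

The perturbation is small (on the order of 1% of the pixel range), yet the correlation scores separate the bits cleanly; HiDDeN's learned encoder plays the same role, but adapts the perturbation to the image content and survives distortion.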

Various noise layers are inserted between the encoder and decoder during training to simulate common image distortions such as cropping, blurring, pixel dropout, and JPEG compression, forcing the networks to learn encodings that survive these perturbations. Because JPEG compression itself is non-differentiable, it is replaced at training time by differentiable approximations, which allows gradients to flow through the noise layer while still yielding models that are robust to true JPEG compression at test time.
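One way such an approximation can work is sketched below; this illustrates the general "mask the high-frequency DCT coefficients" idea rather than the paper's exact layer, and the cutoff `u + v < 5` is a hypothetical choice. Real JPEG quantization rounds coefficients (non-differentiable), whereas transforming each 8x8 block with an orthonormal DCT and zeroing high frequencies is fully linear, so gradients pass straight through.

```python
import numpy as np

N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] = np.sqrt(1.0 / N)               # orthonormal DCT-II basis matrix

# Keep only low-frequency coefficients (hypothetical cutoff).
u, v = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
mask = (u + v < 5).astype(float)

def jpeg_mask(block):
    """Forward 2D DCT, zero the high frequencies, inverse 2D DCT.
    Entirely linear, hence differentiable end to end."""
    coeffs = C @ block @ C.T
    return C.T @ (mask * coeffs) @ C

# A constant block has energy only in the DC coefficient,
# so it passes through the approximation unchanged.
flat = np.full((N, N), 0.5)
roundtrip = jpeg_mask(flat)
```

Because the transform is orthonormal, the only lossy step is the mask itself, which mimics JPEG's tendency to discard high-frequency detail while remaining trainable.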

Capacity and Secrecy

In steganography, the balance between capacity and secrecy is critical. The HiDDeN model demonstrates bits-per-pixel encoding capacity comparable to traditional algorithms such as HUGO and WOW, while maintaining a bit error rate below 10^-5. A notable advantage of HiDDeN over classical methods is that detection by steganalysis tools becomes significantly harder when the precise model weights are unknown, showcasing a crucial benefit of deep learning's model diversity.
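The two metrics in play are simple to state precisely. A back-of-envelope sketch, with hypothetical numbers chosen only to illustrate the definitions (not figures taken from the paper):

```python
def bits_per_pixel(message_bits, height, width):
    """Capacity: how many message bits are carried per cover pixel."""
    return message_bits / (height * width)

def bit_error_rate(sent, received):
    """Accuracy: the fraction of message bits the decoder got wrong."""
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

# e.g. a hypothetical 52-bit message hidden in a 16x16 cover patch:
bpp = bits_per_pixel(52, 16, 16)         # 52 / 256 = 0.203125 bpp
ber = bit_error_rate([1, 0, 1, 1], [1, 0, 1, 1])   # perfect recovery -> 0.0
```

A bit error rate below 10^-5 therefore means fewer than one flipped bit per hundred thousand transmitted, at capacities in the same regime as classical embedding schemes.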

Robustness in Watermarking

For digital watermarking, robustness takes precedence. HiDDeN's flexible noise-layer framework enables training models that withstand a variety of image distortions. The experiments confirm that models trained against a specific perturbation (specialized models) are highly robust to that perturbation, while the combined model, trained with multiple noise types simultaneously, achieves competitive performance across the full range of distortions.
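The combined-noise idea can be sketched as follows. This is an assumption about the general mechanism rather than the paper's code: on each training batch one distortion is sampled at random, so a single decoder must learn to cope with all of them. The dropout and crop layers below follow the paper's descriptions (dropout replaces a random subset of encoded pixels with the cover's pixels; crop keeps only a random square region), with hypothetical parameter values.

```python
import numpy as np

rng = np.random.default_rng(1)

def identity(encoded, cover):
    """No distortion: the decoder sees the encoded image as-is."""
    return encoded

def dropout(encoded, cover, keep=0.7):
    """Replace a random fraction of encoded pixels with cover pixels."""
    keep_mask = rng.random(encoded.shape) < keep
    return np.where(keep_mask, encoded, cover)

def crop(encoded, cover, frac=0.5):
    """Keep only a random square crop of the encoded image."""
    h, w = encoded.shape
    ch, cw = int(h * frac), int(w * frac)
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    return encoded[y:y + ch, x:x + cw]

noise_layers = [identity, dropout, crop]

encoded = rng.random((32, 32))
cover = rng.random((32, 32))

# Per batch, one noise layer is drawn at random and applied
# before the result is handed to the decoder.
layer = noise_layers[rng.integers(len(noise_layers))]
noised = layer(encoded, cover)
```

Specialized models correspond to fixing a single entry of `noise_layers` for the whole of training; the combined model keeps the random draw, trading a little per-distortion accuracy for coverage of all of them.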

Implications and Future Directions

The paper's findings have profound implications for both theoretical research in neural network perturbations and practical applications in secure communication and copyright protection. The adaptability of HiDDeN suggests that end-to-end learning frameworks could supplant traditional steganography and watermarking algorithms in scenarios demanding dynamic adversarial defenses.

Future research could increase the capacity and robustness of the HiDDeN framework, adapt it to other media types such as audio and video, and incorporate additional forms of noise to further enhance robustness.

In summary, the HiDDeN framework stands as a foundational work indicating the potential of leveraging neural network susceptibilities for secure and robust data hiding solutions, marking a step forward in the intersection of deep learning and data concealment technologies.
