Poster: On the Feasibility of Training Neural Networks with Visibly Watermarked Dataset (1902.10854v3)

Published 28 Feb 2019 in cs.CR

Abstract: As there is an increasing need to share data for machine learning, there is growing attention to how data owners can claim ownership of their data. Visible watermarking has been an effective way to claim ownership of visual data, yet visibly watermarked images are not regarded as a primary source for training visual recognition models, due to the visual information lost to the watermark and the possibility of attacks that remove the watermarks. To make watermarked images better suited for machine learning with less risk of removal, we propose DeepStamp, a watermarking framework that, given a watermark image and a trained network for image classification, learns to synthesize watermarked images that are human-perceptible, robust to removal, and usable as training images for classification with minimal accuracy loss. To achieve this goal, we employ a generative multi-adversarial network (GMAN). In experiments on CIFAR-10, we show that DeepStamp learns to transform the watermark as it is embedded in each image, and that the watermarked images can be used to train networks.
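
The abstract describes the framework only at a high level. Below is a minimal, hypothetical PyTorch sketch of what a GMAN-style training step for visible watermark embedding could look like: a generator alpha-blends a fixed watermark into each image, an ensemble of discriminators pushes the result toward natural-looking images, a frozen classifier's loss keeps the watermarked image usable as training data, and a visibility term keeps the watermark from vanishing. All module names, architectures, and loss weights here are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: a GMAN-style step for embedding a visible watermark
# so that watermarked images remain usable for classifier training.
# Every component below (names, sizes, loss weights) is a hypothetical stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WatermarkGenerator(nn.Module):
    """Predicts a per-pixel alpha map and blends a fixed watermark into the image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # alpha in [0, 1]
        )

    def forward(self, image, watermark):
        alpha = self.net(torch.cat([image, watermark], dim=1))
        blended = (1 - alpha) * image + alpha * watermark  # visible blend
        return blended, alpha

def make_discriminator():
    """One member of the GMAN discriminator ensemble (for 32x32 inputs)."""
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Flatten(), nn.Linear(64 * 8 * 8, 1),
    )

generator = WatermarkGenerator()
discriminators = [make_discriminator() for _ in range(3)]  # GMAN: multiple critics
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in for a pretrained CIFAR-10 net
for p in classifier.parameters():
    p.requires_grad_(False)  # the "trained network" is kept fixed
watermark = torch.rand(1, 3, 32, 32)  # stand-in watermark image

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam([p for d in discriminators for p in d.parameters()], lr=2e-4)

def training_step(images, labels):
    wm = watermark.expand_as(images)
    marked, alpha = generator(images, wm)
    real = torch.ones(images.size(0), 1)
    fake = torch.zeros(images.size(0), 1)

    # Discriminators: tell clean images apart from watermarked ones.
    d_loss = sum(
        F.binary_cross_entropy_with_logits(d(images), real) +
        F.binary_cross_entropy_with_logits(d(marked.detach()), fake)
        for d in discriminators
    )
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the ensemble, keep the image classifiable as training data,
    # and keep the watermark visibly present (hypothetical visibility constraint).
    adv = sum(
        F.binary_cross_entropy_with_logits(d(marked), real)
        for d in discriminators
    )
    cls = F.cross_entropy(classifier(marked), labels)
    vis = F.relu(0.3 - alpha.mean())
    g_loss = adv + 10.0 * cls + 5.0 * vis
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example usage with random CIFAR-10-shaped data.
images = torch.rand(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
print(training_step(images, labels))
```

The classification loss on the frozen classifier is what ties the sketch to the paper's stated goal: the watermarked output must still train (or here, be recognized by) a classifier with minimal accuracy loss, while the visibility term prevents the adversarial pressure from erasing the watermark entirely.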
