Exploring Adversarial Robustness of Deep Metric Learning (2102.07265v1)

Published 14 Feb 2021 in cs.LG and cs.AI

Abstract: Deep Metric Learning (DML), a widely used technique, involves learning a distance metric between pairs of samples. DML uses deep neural architectures to learn semantic embeddings of the input, where the distance between similar examples is small while dissimilar ones are far apart. Although the underlying neural networks produce good accuracy on naturally occurring samples, they are vulnerable to adversarially perturbed samples that reduce performance. We take a first step towards training robust DML models and tackle the primary challenge of the metric losses being dependent on the samples in a mini-batch, unlike standard losses that depend only on the specific input-output pair. We analyze this dependence effect and contribute a robust optimization formulation. Using experiments on three commonly used DML datasets, we demonstrate 5- to 76-fold increases in adversarial accuracy, and outperform an existing DML model that was designed to be robust.
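
The paper's exact robust optimization formulation is not reproduced on this page. For orientation only, the sketch below shows the generic adversarial-training (min-max) pattern applied to a batch-dependent metric loss in PyTorch: because a triplet loss couples several samples from the mini-batch, the inner PGD loop perturbs the whole batch to maximize the metric loss, and the outer step minimizes the loss on the perturbed batch. The triplet loss, the PGD hyperparameters (`eps`, `alpha`, `steps`), and the helper names (`pgd_perturb`, `robust_step`) are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of adversarial training with a batch-dependent metric
# loss. Hyperparameters and helper names are illustrative assumptions,
# not taken from the paper.
import torch
import torch.nn.functional as F

def triplet_loss(emb, anchors, positives, negatives, margin=0.2):
    # Metric loss: each term couples several samples in the mini-batch
    # (anchor, positive, negative), unlike a per-example classification loss.
    d_pos = (emb[anchors] - emb[positives]).pow(2).sum(dim=1)
    d_neg = (emb[anchors] - emb[negatives]).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

def pgd_perturb(model, x, anchors, positives, negatives,
                eps=8 / 255, alpha=2 / 255, steps=5):
    # Inner maximization of the min-max objective: find a bounded
    # perturbation of the whole batch that maximizes the metric loss.
    # (Clamping x + delta to the valid image range is omitted for brevity.)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        emb = F.normalize(model(x + delta), dim=1)
        loss = triplet_loss(emb, anchors, positives, negatives)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()      # gradient-ascent step
            delta.clamp_(-eps, eps)           # project onto the L-inf ball
    return delta.detach()

def robust_step(model, optimizer, x, anchors, positives, negatives):
    # Outer minimization: one training step on the perturbed batch.
    delta = pgd_perturb(model, x, anchors, positives, negatives)
    emb = F.normalize(model(x + delta), dim=1)
    loss = triplet_loss(emb, anchors, positives, negatives)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The perturbation is computed for the whole mini-batch at once because the metric loss ties samples together: moving one embedding changes the loss of every triplet it appears in, which is exactly the dependence effect the abstract highlights.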

Authors (5)
  1. Thomas Kobber Panum (2 papers)
  2. Zi Wang (120 papers)
  3. Pengyu Kan (2 papers)
  4. Earlence Fernandes (23 papers)
  5. Somesh Jha (112 papers)
Citations (6)
