
Derivative Manipulation for General Example Weighting (1905.11233v10)

Published 27 May 2019 in cs.LG, cs.CV, and stat.ML

Abstract: Real-world large-scale datasets usually contain noisy labels and are imbalanced. We therefore propose derivative manipulation (DM), a novel and general example weighting approach for training robust deep models under these adverse conditions. DM has two main merits. First, loss functions and example weighting are both common techniques in the literature; DM reveals their connection (a loss function performs example weighting through its derivative) and can replace both. Second, although a loss defines an example weighting scheme via its derivative, loss design is constrained by differentiability. DM is more flexible: it modifies the derivative directly, so the implied loss may even take a non-elementary form. Technically, DM defines an emphasis density function from a derivative magnitude function, and diverse weighting schemes can be derived from it. Extensive experiments on both vision and language tasks demonstrate DM's effectiveness.
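To make the idea concrete, below is a minimal PyTorch-style sketch of derivative-manipulation-style example weighting, under stated assumptions: it uses the true-class probability as the emphasis variable and a Gaussian-shaped emphasis density, and relies on the fact that scaling a per-example loss term scales that example's gradient magnitude by the same factor. The function name dm_weighted_loss and the hyperparameters lam and sigma are illustrative choices of ours, not necessarily the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

def dm_weighted_loss(logits, targets, lam=0.5, sigma=0.25):
    """Hedged sketch of derivative-manipulation-style example weighting.

    Each example's cross-entropy term is weighted by an emphasis density
    evaluated at its true-class probability p_i. Because the weights are
    detached from the graph, multiplying them into the loss rescales each
    example's gradient magnitude directly. The Gaussian-shaped emphasis
    centered at `lam` is an illustrative assumption, not the paper's
    prescribed function.
    """
    ce = F.cross_entropy(logits, targets, reduction="none")  # per-example CE
    with torch.no_grad():
        # True-class probability p_i for each example.
        p = F.softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
        # Emphasis density over p_i, normalised across the batch.
        w = torch.exp(-(p - lam) ** 2 / (2 * sigma ** 2))
        w = w / w.sum().clamp_min(1e-12)
    return (w * ce).sum()

# Usage: logits from a model, integer class targets.
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
loss = dm_weighted_loss(logits, targets)
loss.backward()
```

A design note on this sketch: computing the weights under torch.no_grad() keeps the emphasis density out of backpropagation, so the only effect on training is the rescaling of each example's gradient, which is the behavior the abstract attributes to example weighting.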

Citations (1)
