
Smoothly Giving up: Robustness for Simple Models (2302.09114v1)

Published 17 Feb 2023 in cs.LG, cs.IT, and math.IT

Abstract: There is a growing need for models that are interpretable and have reduced energy and computational cost (e.g., in health care analytics and federated learning). Examples of algorithms to train such models include logistic regression and boosting. However, one challenge facing these algorithms is that they provably suffer from label noise; this has been attributed to the joint interaction between oft-used convex loss functions and simpler hypothesis classes, resulting in too much emphasis being placed on outliers. In this work, we use the margin-based $\alpha$-loss, which continuously tunes between canonical convex and quasi-convex losses, to robustly train simple models. We show that the $\alpha$ hyperparameter smoothly introduces non-convexity and offers the benefit of "giving up" on noisy training examples. We also provide results on the Long-Servedio dataset for boosting and a COVID-19 survey dataset for logistic regression, highlighting the efficacy of our approach across multiple relevant domains.
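
The abstract does not reproduce the loss formula itself. As a point of reference, a minimal sketch is given below of one common formulation of the margin-based $\alpha$-loss from the authors' earlier work on tunable losses, $\tilde{\ell}_\alpha(z) = \frac{\alpha}{\alpha-1}\bigl(1 - \sigma(z)^{(\alpha-1)/\alpha}\bigr)$ with $\sigma$ the sigmoid and $z$ the margin; whether this exact parameterization is the one used in the paper is an assumption here. The sketch illustrates the "giving up" behavior: for $\alpha = 1$ the loss reduces to the (convex) logistic loss, while for $\alpha > 1$ it is bounded on large negative margins, so badly misclassified (possibly mislabeled) points contribute less to training.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def alpha_loss(z, alpha):
    """Margin-based alpha-loss evaluated at margin z = y * f(x).

    Assumed form: (alpha / (alpha - 1)) * (1 - sigmoid(z)^((alpha - 1)/alpha)).
    alpha = 1 recovers the logistic loss (convex); alpha > 1 flattens the loss
    on large negative margins, i.e. the model "gives up" on outliers.
    """
    if np.isclose(alpha, 1.0):
        # Limit alpha -> 1: standard logistic loss log(1 + exp(-z)).
        return np.log1p(np.exp(-z))
    return (alpha / (alpha - 1.0)) * (1.0 - sigmoid(z) ** ((alpha - 1.0) / alpha))

# A badly misclassified point (large negative margin): the logistic loss grows
# roughly linearly in |z|, while large-alpha losses stay bounded near 1.
for a in (0.8, 1.0, 1.5, 4.0):
    print(f"alpha={a}: loss at z=-5 is {alpha_loss(-5.0, a):.3f}")
```

Under this assumed form, decreasing $\alpha$ below 1 places more weight on hard examples, and increasing it above 1 caps the penalty on them, which matches the abstract's description of smoothly tuning between convex and quasi-convex losses.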

Authors (5)
  1. Tyler Sypherd (7 papers)
  2. Nathan Stromberg (7 papers)
  3. Richard Nock (72 papers)
  4. Visar Berisha (34 papers)
  5. Lalitha Sankar (97 papers)
Citations (1)
