Discrepancy Minimization via Regularization (2211.05509v1)

Published 10 Nov 2022 in cs.DS and cs.DM

Abstract: We introduce a new algorithmic framework for discrepancy minimization based on regularization. We demonstrate how varying the regularizer allows us to re-interpret several breakthrough works in algorithmic discrepancy, ranging from Spencer's theorem [Spencer 1985, Bansal 2010] to Banaszczyk's bounds [Banaszczyk 1998, Bansal-Dadush-Garg 2016]. Using our techniques, we also show that the Beck-Fiala and Komlós conjectures are true in a new regime of pseudorandom instances.
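
For readers unfamiliar with the setting: the combinatorial discrepancy of an m x n incidence matrix A is the minimum over colorings x in {-1, +1}^n of the largest entry of |Ax|, i.e. the best achievable worst-case imbalance of any set under a two-coloring of the elements. The Python sketch below is only a brute-force illustration of this objective on a small, made-up set system; it is not the paper's regularization-based algorithm, and the matrix and function names are hypothetical.

# Illustrative sketch (not from the paper): exact discrepancy of a small
# set system by exhaustive search over all +/-1 colorings.
import itertools
import numpy as np

def discrepancy(A: np.ndarray) -> int:
    """Return min over x in {-1,+1}^n of max_i |(A x)_i| for incidence matrix A."""
    n = A.shape[1]
    best = float("inf")
    for signs in itertools.product((-1, 1), repeat=n):
        x = np.array(signs)
        best = min(best, int(np.max(np.abs(A @ x))))
    return best

if __name__ == "__main__":
    # Hypothetical 4-set system on 5 elements (rows = sets, columns = elements).
    A = np.array([
        [1, 1, 0, 1, 0],
        [0, 1, 1, 0, 1],
        [1, 0, 1, 1, 0],
        [0, 1, 0, 1, 1],
    ])
    print("discrepancy:", discrepancy(A))  # optimal maximum imbalance over all sets

The exhaustive search takes 2^n time, which is exactly why algorithmic results such as those surveyed in the abstract (Spencer, Banaszczyk, Beck-Fiala, Komlós) aim for efficient colorings with provable discrepancy bounds.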

Citations (5)
