Dimension Reduction of Distributionally Robust Optimization Problems (2504.06381v1)

Published 8 Apr 2025 in math.OC and q-fin.RM

Abstract: We study distributionally robust optimization (DRO) problems with uncertainty sets consisting of high dimensional random vectors that are close in the multivariate Wasserstein distance to a reference random vector. We give conditions under which the images of these sets under scalar-valued aggregation functions are equal to or contained in uncertainty sets of univariate random variables defined via a univariate Wasserstein distance. This allows us to rewrite or bound high-dimensional DRO problems with simpler DRO problems over the space of univariate random variables. We generalize the results to uncertainty sets defined via the Bregman-Wasserstein divergence and the max-sliced Wasserstein and Bregman-Wasserstein divergences. The max-sliced divergences allow us to jointly model distributional uncertainty around the reference random vector and uncertainty in the aggregation function. Finally, we derive explicit bounds for worst-case risk measures that belong to the class of signed Choquet integrals.
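
As a rough illustration of the reduction described in the abstract (the notation below is assumed for illustration and is not taken from the paper): let $X^0$ denote the reference random vector, $f$ a scalar-valued aggregation function, $\rho$ a risk measure, $W_p$ the Wasserstein distance of order $p$, and $\varepsilon$ the radius of the multivariate uncertainty set. If the image of the multivariate Wasserstein ball around $X^0$ under $f$ is contained in a univariate Wasserstein ball of some radius $\delta$ around $f(X^0)$, then the high-dimensional worst-case problem is bounded by a univariate one:

$$\sup_{X \,:\, W_p(X,\, X^0) \le \varepsilon} \rho\bigl(f(X)\bigr) \;\le\; \sup_{Y \,:\, W_p\bigl(Y,\, f(X^0)\bigr) \le \delta} \rho(Y).$$

Here the univariate radius $\delta$ would depend on $\varepsilon$ and on properties of $f$ (for instance, a Lipschitz constant); under the equality conditions mentioned in the abstract, the image of the multivariate uncertainty set coincides with the univariate one and the two problems have the same value.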
