
Online Optimization and Ambiguity-based Learning of Distributionally Uncertain Dynamic Systems (2102.09111v2)

Published 18 Feb 2021 in eess.SY, cs.LG, cs.SY, math.DS, math.OC, stat.AP, and stat.ML

Abstract: This paper proposes a novel approach to construct data-driven online solutions to optimization problems (P) subject to a class of distributionally uncertain dynamical systems. The introduced framework allows for the simultaneous learning of distributional system uncertainty via a parameterized, control-dependent ambiguity set using a finite historical data set, and its use to make online decisions with probabilistic regret function bounds. Leveraging the merits of Machine Learning, the main technical approach relies on the theory of Distributionally Robust Optimization (DRO) to hedge against uncertainty and provide less conservative results than standard Robust Optimization approaches. Starting from recent results that describe ambiguity sets via parameterized, control-dependent empirical distributions as well as ambiguity radii, we first present a tractable reformulation of the corresponding optimization problem while maintaining the probabilistic guarantees. We then specialize these problems to the cases of 1) optimal one-stage control of distributionally uncertain nonlinear systems, and 2) resource allocation under distributional uncertainty. A novelty of this work is that it extends DRO to online optimization problems subject to a distributionally uncertain dynamical system constraint, handled via a control-dependent ambiguity set that leads to online-tractable optimization with probabilistic guarantees on regret bounds. Further, we introduce an online version of Nesterov's accelerated-gradient algorithm, and analyze its performance to solve this class of problems via dissipativity theory.
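The abstract's final contribution is an online variant of Nesterov's accelerated-gradient algorithm applied to a sequence of time-varying objectives. The following is a minimal sketch of a standard online Nesterov update for that setting; the function names, the momentum schedule, and the fixed step size 1/L are illustrative assumptions, not the paper's exact scheme or its dissipativity-based analysis.

```python
import numpy as np

def online_nesterov(grad_t, x0, T, L=10.0):
    """Online accelerated gradient on a sequence of objectives f_0, ..., f_{T-1}.

    grad_t(t, y): gradient of the round-t loss f_t evaluated at y (assumed oracle).
    L: an estimate of the gradients' Lipschitz constant, giving step size 1/L.
    Returns the list of iterates x_1, ..., x_T.
    """
    x = y = np.asarray(x0, dtype=float)
    step = 1.0 / L
    theta = 1.0
    iterates = []
    for t in range(T):
        # Gradient step at the extrapolated point y using the current loss f_t.
        x_next = y - step * grad_t(t, y)
        # Standard Nesterov momentum-parameter update.
        theta_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta**2))
        # Extrapolate for the next round.
        y = x_next + ((theta - 1.0) / theta_next) * (x_next - x)
        x, theta = x_next, theta_next
        iterates.append(x.copy())
    return iterates
```

For a slowly varying sequence of losses, the regret of such a scheme is typically bounded in terms of the path length of the per-round minimizers; the paper's probabilistic regret bounds additionally account for the distributional uncertainty in the dynamics.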

Citations (3)
