
Inexact Alternating Minimization Algorithm for Distributed Optimization with an Application to Distributed MPC (1608.00413v1)

Published 1 Aug 2016 in math.OC

Abstract: In this paper, we propose the inexact alternating minimization algorithm (inexact AMA), which allows inexact iterations in the algorithm, and its accelerated variant, called the inexact fast alternating minimization algorithm (inexact FAMA). We show that inexact AMA and inexact FAMA are equivalent to the inexact proximal-gradient method and its accelerated variant applied to the dual problem. Based on this equivalence, we derive complexity upper-bounds on the number of iterations for the inexact algorithms. We apply inexact AMA and inexact FAMA to distributed optimization problems, with an emphasis on distributed MPC applications, and show the convergence properties for this special case. By employing the complexity upper-bounds on the number of iterations, we provide sufficient conditions on the inexact iterations for the convergence of the algorithms. We further study the special case of quadratic local objectives in the distributed optimization problems, which is a standard form in distributed MPC. For this special case, we allow local computational errors at each iteration. By exploiting a warm-starting strategy and the sufficient conditions on the errors for convergence, we propose an approach to certify the number of iterations for solving local problems, which guarantees that the local computational errors satisfy the sufficient conditions and the inexact distributed optimization algorithm converges to the optimal solution.
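The abstract's core idea — that an inexact splitting method still converges when the per-iteration errors satisfy a summability-type condition — can be illustrated with a minimal sketch. The code below is not the paper's AMA/FAMA formulation; it is a generic inexact proximal-gradient method (the method the paper shows inexact AMA is dual-equivalent to) on a hypothetical lasso-style problem, with an artificial error of norm proportional to 1/k² injected into each proximal step. The problem data, error schedule, and tolerances are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_proximal_gradient(A, b, lam, iters=500, err_scale=0.0, seed=1):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with perturbed prox steps.

    err_scale=0 gives the exact method; err_scale>0 adds an error of
    norm err_scale/k^2 at iteration k (a summable, hence convergence-
    preserving, error sequence in the inexact proximal-gradient theory).
    """
    rng = np.random.default_rng(seed)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    for k in range(1, iters + 1):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
        if err_scale > 0:                  # inexact prox: inject error e_k
            e = rng.standard_normal(x.size)
            x += (err_scale / k**2) * e / np.linalg.norm(e)
    return x

# Illustrative random problem instance (assumed data, not from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 0.1

x_exact = inexact_proximal_gradient(A, b, lam, err_scale=0.0)
x_inexact = inexact_proximal_gradient(A, b, lam, err_scale=1e-2)
```

Because the error norms are summable, `x_inexact` ends up close to the exact iterate; with a non-decaying error the method would only reach a neighborhood of the optimum, which is the kind of trade-off the paper's sufficient conditions make precise for the distributed MPC setting.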
