A Riemannian Alternating Descent Ascent Algorithmic Framework for Nonconvex-Linear Minimax Problems on Riemannian Manifolds (2409.19588v1)

Published 29 Sep 2024 in math.OC

Abstract: Recently, there has been growing interest in minimax problems on Riemannian manifolds due to their wide applications in machine learning and signal processing. Although many algorithms have been developed for minimax problems in the Euclidean setting, there are relatively few works studying minimax problems on manifolds. In this paper, we develop a flexible Riemannian alternating descent ascent (RADA) algorithmic framework for solving nonconvex-linear minimax problems on Riemannian manifolds. Within this framework, we propose two easy-to-implement yet efficient algorithms that alternately perform one or multiple projected/Riemannian gradient descent steps and a proximal gradient ascent step at each iteration. We show that the proposed RADA algorithmic framework can find both an $\varepsilon$-Riemannian-game-stationary point and an $\varepsilon$-Riemannian-optimization-stationary point of the considered problem within $\mathcal{O}(\varepsilon^{-3})$ iterations, achieving the best-known iteration complexity. We also reveal intriguing similarities and differences between the algorithms developed within our proposed framework and existing algorithms, which provide important insights into why the former outperform the latter. Lastly, we report numerical results on sparse principal component analysis (PCA), fair PCA, and sparse spectral clustering to demonstrate the superior performance of the proposed algorithms.
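
The abstract describes the alternating structure at a high level: one or more projected/Riemannian gradient descent steps on the manifold variable, followed by a proximal gradient ascent step on the variable in which the objective is linear. Purely as an illustration of that structure (not the authors' algorithm, whose step-size rules, regularization, and stopping criteria are specified in the paper), the NumPy sketch below applies it to a toy fair-PCA-style instance: minimize over the unit sphere, maximize over the probability simplex, with objective $\sum_i y_i\,(-x^\top A_i x)$, which is nonconvex in $x$ and linear in $y$. The problem instance, the function names, the step sizes tau and eta, the normalization retraction, and the simplex-projection prox are all assumptions chosen for illustration.

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    # Largest index k with u[k] > (css[k] - 1) / (k + 1).
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def rada_sketch(As, n_iter=500, tau=0.05, eta=0.1, inner=1, seed=0):
    """Illustrative alternating descent-ascent loop on the unit sphere.

    Toy instance (an assumption, not the paper's setup):
        min_{||x|| = 1}  max_{y in simplex}  sum_i y_i * (-x^T A_i x),
    with each A_i symmetric. The objective is linear in y and
    nonconvex in x.
    """
    rng = np.random.default_rng(seed)
    n = As[0].shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)                    # start on the sphere
    y = np.full(len(As), 1.0 / len(As))       # start at simplex center

    for _ in range(n_iter):
        # One or a few Riemannian gradient descent steps in x (y fixed).
        for _ in range(inner):
            # Euclidean gradient of sum_i y_i * (-x^T A_i x).
            egrad = -2.0 * sum(w * A @ x for w, A in zip(y, As))
            # Riemannian gradient on the sphere: project the Euclidean
            # gradient onto the tangent space at x.
            rgrad = egrad - (egrad @ x) * x
            # Retraction: take the step, then renormalize onto the sphere.
            x = x - tau * rgrad
            x /= np.linalg.norm(x)

        # Proximal gradient ascent step in y. With the indicator of the
        # simplex playing the role of the regularizer, the prox reduces
        # to the simplex projection.
        vals = np.array([-x @ A @ x for A in As])   # gradient w.r.t. y
        y = proj_simplex(y + eta * vals)

    return x, y
```

For example, calling rada_sketch with a list of symmetric PSD matrices (one covariance matrix per group) drives x toward a direction balancing the worst-case group objective, while y concentrates on the groups attaining that worst case; this mirrors the descent/ascent alternation the abstract describes, without reproducing the paper's specific update rules or guarantees.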
