Regularized Dual ADMM for Structured Convex Optimization
- Regularized Dual ADMM is a method that extends traditional ADMM by integrating explicit regularization to handle composite convex optimization problems with structured constraints.
- It employs dual splitting using Bregman divergence, achieving robust convergence guarantees, sparsity, and efficient parallel updates in large-scale settings.
- Applications span sparse modeling, structured learning, and motion planning, demonstrating practical speedups and enhanced performance in nonconvex reformulations.
Regularized Dual Alternating Direction Method of Multipliers (RDA)—often referred to in research contexts as Regularized Dual ADMM or, in Bregman generalizations, Bregman ADMM—addresses composite convex optimization and constrained problems by integrating explicit regularization structures into ADMM-type primal-dual splitting schemes. RDA arises both in online convex optimization settings (dual averaging, composite mirror descent, FTRL families) and in large-scale structured problems where dual decompositions and parallelization are essential. It supports rigorous theoretical guarantees and practical efficiency in sparse modeling, structured regularization, distributed computation, and nonconvex reformulations.
1. Mathematical Formulation and Core Principles
RDA accommodates convex minimization objectives subject to linear or structured constraints, with composite regularization terms. Consider the linearly-constrained convex program:

$$
\min_{x,\,z}\; f(x) + g(z) \quad \text{subject to} \quad Ax + Bz = c,
$$

where $f$ and $g$ are proper, closed convex functions, and the matrices $A$, $B$ and the vector $c$ encode the problem structure. An explicit regularization term (e.g., an $\ell_1$ penalty, group structure, or KL-divergence) is handled through the composite objective definitions.
RDA operates primarily via splitting methods applied to the Fenchel dual, with dual variables representing constraint multipliers. The Bregman generalization introduces a Legendre function $\phi$ to define the Bregman divergence

$$
D_\phi(u, v) = \phi(u) - \phi(v) - \langle \nabla\phi(v),\, u - v\rangle,
$$

with quadratic $\phi$ yielding Euclidean geometry as a special case (Ma et al., 10 Sep 2025).
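To make the divergence concrete, here is a minimal numerical sketch (illustrative code, not taken from the cited papers) evaluating two standard choices of $\phi$: the squared Euclidean norm and the negative entropy, whose divergence is the generalized KL divergence.

```python
import numpy as np

def bregman_euclidean(u, v):
    """D_phi for phi(x) = 0.5 * ||x||^2: half the squared Euclidean distance."""
    return 0.5 * np.sum((u - v) ** 2)

def bregman_kl(u, v, eps=1e-12):
    """D_phi for phi(x) = sum_i x_i log x_i (negative entropy), u, v > 0:
    the generalized Kullback-Leibler divergence."""
    u = np.asarray(u, dtype=float) + eps
    v = np.asarray(v, dtype=float) + eps
    return np.sum(u * np.log(u / v) - u + v)

u = np.array([0.2, 0.5, 0.3])
v = np.array([0.4, 0.4, 0.2])
print(bregman_euclidean(u, v))  # Euclidean special case
print(bregman_kl(u, v))         # entropy/KL special case
```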
2. Algorithmic Frameworks: Primal–Dual Updates and Regularization
The base RDA update, in primal form for a convex domain $\mathcal{X}$, is:

$$
x_{t+1} = \arg\min_{x \in \mathcal{X}} \Big\{ g_{1:t} \cdot x \;+\; t\,\Psi(x) \;+\; R_{1:t}(x) \Big\},
$$

where $g_{1:t} = \sum_{s=1}^{t} g_s$ is the accumulation of subgradients from previous loss functions, $\Psi$ is the explicit penalty (often $\lambda\lVert x\rVert_1$), and $R_{1:t}$ is a quadratic or Bregman regularizer, e.g. $R_{1:t}(x) = \tfrac{\sigma_{1:t}}{2}\lVert x\rVert_2^2$ (McMahan, 2010).
Generalized to two-block splitting with Bregman regularization, the iterates alternate minimization of a Bregman-augmented Lagrangian over the two primal blocks, followed by a Bregman-regularized multiplier step; a representative form is

$$
\begin{aligned}
x^{k+1} &\in \arg\min_{x}\; f(x) + \langle \lambda^{k},\, Ax + Bz^{k} - c\rangle + \rho_k\, D_\phi\!\big(c - Ax,\; Bz^{k}\big),\\
z^{k+1} &\in \arg\min_{z}\; g(z) + \langle \lambda^{k},\, Ax^{k+1} + Bz - c\rangle + \rho_k\, D_\phi\!\big(Bz,\; c - Ax^{k+1}\big),\\
\lambda^{k+1} &= \arg\max_{\lambda}\; \Big\{ \langle \lambda,\, Ax^{k+1} + Bz^{k+1} - c\rangle - \tfrac{1}{\rho_k}\, D_\phi\!\big(\lambda,\, \lambda^{k}\big) \Big\}.
\end{aligned}
$$

The parameter $\rho_k$ serves as the regularization penalty, with its decay rate impacting convergence properties (Ma et al., 10 Sep 2025).
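For orientation, the following is a minimal sketch of the quadratic (Euclidean) special case, i.e., classical two-block ADMM applied to the lasso problem $\min_{x,z}\ \tfrac12\lVert Ax-b\rVert_2^2 + \lambda\lVert z\rVert_1$ subject to $x - z = 0$; the problem instance, fixed penalty $\rho$, and variable names are illustrative assumptions rather than the setup of any cited paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (coordinate-wise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """Two-block ADMM for 0.5*||Ax-b||^2 + lam*||z||_1 s.t. x - z = 0
    (the quadratic-Bregman special case of the scheme above)."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u = lambda / rho (scaled dual)
    AtA = A.T @ A; Atb = A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))            # cached x-update system
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))                   # x-block: quadratic minimization
        z = soft_threshold(x + u, lam / rho)            # z-block: prox of the l1 penalty
        u = u + x - z                                   # multiplier step (Euclidean geometry)
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b, lam=0.5), 3))
```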
3. Sparsity, Composite Objectives, and Closed-Form Solutions
For composite regularizers (notably the $\ell_1$ penalty $\Psi(x) = \lambda\lVert x\rVert_1$), RDA provides coordinate-wise closed-form updates. With the quadratic stabilizer $R_{1:t}(x) = \tfrac{\sigma_{1:t}}{2}\lVert x\rVert_2^2$, the minimizer is the soft-thresholding rule

$$
x_{t+1,i} = -\frac{1}{\sigma_{1:t}}\,\operatorname{sign}(g_{1:t,i})\,\big[\,|g_{1:t,i}| - t\lambda\,\big]_+ .
$$

This exact handling of cumulative regularization results in substantially more sparsity compared to local-linearized approaches (e.g., FOBOS/composite mirror descent), which only penalize via subgradient approximations from the current round. RDA's global inclusion of the term $t\lambda\lVert x\rVert_1$ enforces precise soft-thresholding, driving more coordinates to zero (McMahan, 2010).
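A minimal online sketch of this update follows; the toy data stream and the stabilizer schedule $\sigma_{1:t} \propto \sqrt{t}$ are illustrative assumptions.

```python
import numpy as np

def rda_l1_update(g_sum, t, lam, sigma):
    """Closed-form RDA step for an l1 penalty with a quadratic stabilizer of
    cumulative strength sigma: coordinate-wise soft-thresholding of the
    accumulated subgradients g_sum = g_1 + ... + g_t."""
    return -np.sign(g_sum) * np.maximum(np.abs(g_sum) - t * lam, 0.0) / sigma

# Toy online least-squares stream with two relevant coordinates.
rng = np.random.default_rng(1)
d, lam = 10, 0.1
w_true = np.zeros(d); w_true[:2] = 1.0
x, g_sum = np.zeros(d), np.zeros(d)
for t in range(1, 501):
    a = rng.standard_normal(d)
    y = a @ w_true + 0.01 * rng.standard_normal()
    g_sum += (x @ a - y) * a                      # subgradient of the squared loss at x_t
    x = rda_l1_update(g_sum, t, lam, sigma=5.0 * np.sqrt(t))
print(np.round(x, 3))   # final iterate; thresholding against t*lam keeps many coordinates at exactly zero
```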
Complex structured regularizers (e.g., overlapped group lasso, fused lasso, trace-norm) are managed via a splitting matrix and transformation into prox-friendly domains, allowing efficient updates for elaborate penalty types (1311.0622).
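As an illustration of the splitting idea, the sketch below (with a hypothetical 1-D fused-lasso instance) builds the difference matrix that maps the structured penalty into a plain $\ell_1$ norm on an auxiliary variable, whose prox is coordinate-wise soft-thresholding.

```python
import numpy as np

# Splitting for the fused-lasso penalty lam * sum_i |x_{i+1} - x_i|: introduce
# z = D x with a difference matrix D, so the ADMM z-block only needs the prox
# of the plain l1 norm.
def difference_matrix(n):
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    return D

n = 6
D = difference_matrix(n)
x = np.array([0.0, 0.1, 0.1, 2.0, 2.1, 2.0])
z = D @ x          # the variable the prox-friendly l1 penalty acts on
print(z)           # successive differences; penalizing ||z||_1 fuses plateaus of x
```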
4. Convergence Analysis, Regret Bounds, and Assumptions
RDA exhibits favorable convergence and regret properties. For online convex optimization with bounded subgradients $\lVert g_t\rVert \le G$ and domain diameter $D$, quadratic stabilizers with cumulative strength $\sigma_{1:t} = \Theta(\sqrt{t})$ yield

$$
\operatorname{Regret}(T) \;=\; O\!\big(G D \sqrt{T}\big).
$$

The composite penalty $\Psi$ does not degrade this rate, since it is applied exactly rather than linearized; with strongly convex stabilization the dependence on $T$ improves to logarithmic (McMahan, 2010).
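For context, a generic bound of this family, stated up to constant factors rather than quoted from the cited analyses, decomposes the regret into a stabilizer term and a sum of dual-norm subgradient terms:

$$
\operatorname{Regret}(T) \;\lesssim\; R_{1:T}(x^\star) \;+\; \sum_{t=1}^{T} \frac{\lVert g_t\rVert_*^2}{\sigma_{1:t}},
$$

and choosing $\sigma_{1:t} \propto (G/D)\sqrt{t}$ balances the two terms at $O(GD\sqrt{T})$.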
With Bregman regularization, convergence analysis assumes:
- Strong convexity of the Legendre function $\phi$
- Lipschitz continuity of $\nabla\phi$
- Bounded subgradients
- A nonincreasing regularization sequence $\{\rho_k\}$
and delivers a sublinear rate in the primal objective gap for the ergodic iterates.
Special cases (Euclidean quadratic $\phi$, weighted quadratic $\phi$, and entropy $\phi$) recover classical ADMM, variable-metric ADMM, and exponential-multiplier schemes, respectively (Ma et al., 10 Sep 2025).
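To see how the entropy case produces multiplicative updates, consider the regularized multiplier step in the representative form of Section 2, writing $r^{k+1} = Ax^{k+1} + Bz^{k+1} - c$:

$$
\lambda^{k+1} = \arg\max_{\lambda \ge 0}\; \Big\{ \langle \lambda,\, r^{k+1}\rangle - \tfrac{1}{\rho_k} \sum_i \big(\lambda_i \log\tfrac{\lambda_i}{\lambda_i^{k}} - \lambda_i + \lambda_i^{k}\big) \Big\}
\;\;\Longrightarrow\;\;
\lambda_i^{k+1} = \lambda_i^{k}\, \exp\!\big(\rho_k\, r_i^{k+1}\big),
$$

a multiplicative (exponential-multiplier) update, whereas the quadratic case gives the familiar additive step $\lambda^{k+1} = \lambda^{k} + \rho_k\, r^{k+1}$.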
5. Stochastic and Parallelized RDA Variants
Stochastic Dual Coordinate Ascent with ADMM (SDCA–ADMM) leverages RDA principles in settings with massive data and complex regularization. It partitions variables into sub-batches, applies proximal updates per batch, and achieves linear (exponential) convergence rates in composite Lyapunov metrics under mild strong-convexity/smoothness assumptions. Sub-batching offers 2–3× acceleration over pure batch methods, with memory usage scaling linearly in the number of maintained dual variables (1311.0622).
In nonconvex and bi-convex problems—particularly in MPC-based motion planning—RDA enables decomposition into primal and obstacle-specific dual blocks, so that all M collision constraints are solved in parallel per MPC step. This parallel structure renders the overall method highly scalable; practical implementations demonstrate near-constant computation time as the number of obstacles increases (Han et al., 2022).
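The parallel structure can be sketched as follows; the per-obstacle update rule and the thread pool are illustrative stand-ins, not the interface of the planner in (Han et al., 2022).

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def update_dual_block(lam, residual, rho=1.0):
    """Illustrative per-obstacle dual update: each block depends only on the
    current primal trajectory and its own multipliers, so all M blocks are
    mutually independent."""
    return np.maximum(lam + rho * residual, 0.0)   # projected (nonnegative) ascent step

def parallel_dual_step(lams, residuals, rho=1.0):
    # The map below is embarrassingly parallel; a thread pool stands in for the
    # per-obstacle workers of a real planner.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda pair: update_dual_block(*pair, rho), zip(lams, residuals)))

# Toy usage: M = 4 obstacles, each with its own multiplier vector and constraint residual.
rng = np.random.default_rng(0)
lams = [np.zeros(3) for _ in range(4)]
residuals = [rng.standard_normal(3) for _ in range(4)]
print(parallel_dual_step(lams, residuals))
```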
6. Applications: Structured Learning, Optimal Transport, and Motion Planning
RDA finds applications in high-dimensional sparse regression, group lasso, graph-guided fused lasso, trace-norm penalization, and optimal transport (1311.0622, Ma et al., 10 Sep 2025). The Bregman ADMM and exponential-multiplier specializations support entropy-regularized transport and other compositional domains.
In autonomous navigation, RDA delivers accelerated collision-free motion planning by reformulating nonconvex MPC constraints as smooth bi-convex programs amenable to dual splitting and parallel computation per obstacle. The practical impact includes:
- Real-time planning with Ackermann kinematics and non-point-mass shapes
- Adaptive clearance margins using dynamic safety distance vectors with regularization
- Empirical speedups (2–3× over interior-point benchmarks), increased robustness (95% success rate versus 80% for TEB), and reduced solution times (Han et al., 2022).
7. Unifying View: FTRL, Mirror Descent, and ADMM
RDA, FTRL-Proximal, and composite-objective mirror descent (COMID, FOBOS) share a common template:

$$
x_{t+1} = \arg\min_{x \in \mathcal{X}} \Big\{ \hat{g}_{1:t} \cdot x \;+\; \hat{\Psi}_{1:t}(x) \;+\; R_{1:t}(x) \Big\},
$$
where choices of stabilization (quadratic, Bregman), regularizer accumulation (global vs. local), and subgradient evaluation (implicit vs. explicit) determine the specific algorithmic instance (McMahan, 2010). Mirror descent arises with Bregman sum regularization and linearized loss, further illustrating the breadth of the RDA framework in first-order online and batch settings.
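The global-versus-local distinction for the $\ell_1$ term can be made concrete with a short sketch; the step-size and stabilizer schedules are illustrative assumptions.

```python
import numpy as np

def rda_step(g_sum, t, lam, sigma):
    # Global accumulation: the full t*lam*||x||_1 term enters the argmin exactly,
    # giving soft-thresholding of the accumulated subgradients.
    return -np.sign(g_sum) * np.maximum(np.abs(g_sum) - t * lam, 0.0) / sigma

def fobos_step(x, g, eta, lam):
    # Local handling: a gradient step on the loss followed by the prox of
    # eta*lam*||.||_1, i.e., only the current round's penalty is applied.
    v = x - eta * g
    return np.sign(v) * np.maximum(np.abs(v) - eta * lam, 0.0)

# One round on the same stream, showing the two update rules side by side.
g_sum = np.array([3.0, -0.4, 0.1]); g_t = np.array([0.6, -0.1, 0.02])
t, lam = 10, 0.1
print(rda_step(g_sum, t, lam, sigma=np.sqrt(t)))
print(fobos_step(np.array([0.5, 0.0, 0.0]), g_t, eta=0.1, lam=lam))
```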
In summary, Regularized Dual Alternating Direction Method of Multipliers constitutes a versatile and theoretically rigorous architecture for solving high-dimensional, structured, and constrained optimization problems with explicit regularization. Its adaptability to various splitting schemes, regularizer forms, and parallelized or stochastic updates makes it central in modern algorithmic convex optimization and motion planning.