DR-Submodular Maximization
- DR-submodular maximization is the optimization of functions that exhibit diminishing returns across continuous, integer-lattice, and algebraic domains, generalizing classical set submodularity.
- Algorithmic approaches such as Frank-Wolfe, projected gradient ascent, and greedy methods provide provable approximations under various convex, combinatorial, and lattice constraints.
- Applications span influence maximization, MAP inference, resource allocation, and energy management, demonstrating the technique’s versatility in machine learning and optimization.
A diminishing-returns (DR) submodular function generalizes classical submodularity from set functions to continuous, integer-lattice, and even lattice-theoretic domains. DR-submodular maximization is the task of optimizing such functions—often non-monotone—subject to combinatorial, convex, lattice, or algebraic constraints. This problem arises in a wide range of applications in machine learning, stochastic inference, resource allocation, and combinatorial optimization, due to the diminishing-returns structure of influence models, entropy relaxations, and probabilistic graphical models.
1. Foundations: Definitions and Mathematical Structure
A function $f:\mathcal{X}\to\mathbb{R}$, where $\mathcal{X}=\prod_{i=1}^{n}\mathcal{X}_i$ is a product of intervals (or, more generally, of integer ranges), is DR-submodular if it exhibits a form of diminishing returns on each coordinate. Formally, for any $x \le y$ (coordinatewise), any coordinate $i$, and any step size $k \ge 0$, $f(x + k e_i) - f(x) \ge f(y + k e_i) - f(y)$,
provided the increments remain in the domain (Du et al., 2022, Niazadeh et al., 2018). If $f$ is differentiable, this is equivalent to the partial derivatives being coordinatewise non-increasing: $\nabla f(x) \ge \nabla f(y)$ whenever $x \le y$; if $f$ is twice differentiable, it is equivalent to all second partial derivatives (including the diagonal ones) being nonpositive: $\partial^2 f / \partial x_i \partial x_j \le 0$ for all $i, j$. This generalizes the diminishing-returns property from set functions to continuous domains (Bian et al., 2020). DR-submodular functions are necessarily submodular in the lattice sense, but the converse holds only under an additional coordinatewise concavity condition (Niazadeh et al., 2018).
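To make the definition concrete, the following minimal sketch (an illustration, not taken from the cited papers) checks the DR inequality by finite differences for a quadratic $f(x) = \tfrac{1}{2}x^\top H x + h^\top x$ whose Hessian $H$ has all entries nonpositive; the specific $H$, $h$, and sampling ranges are arbitrary choices.

```python
import numpy as np

# Illustrative quadratic whose Hessian H has all entries nonpositive,
# hence DR-submodular (all second partial derivatives are <= 0).
H = np.array([[-1.0, -0.5,  0.0],
              [-0.5, -2.0, -0.3],
              [ 0.0, -0.3, -1.5]])
h = np.array([2.0, 1.0, 1.5])

def f(x):
    """Quadratic objective 0.5 * x^T H x + h^T x."""
    return 0.5 * x @ H @ x + h @ x

rng = np.random.default_rng(0)
e = np.eye(3)
for _ in range(1000):
    x = rng.uniform(0.0, 0.5, size=3)      # x <= y coordinatewise
    y = x + rng.uniform(0.0, 0.5, size=3)
    i = rng.integers(3)
    k = rng.uniform(0.0, 0.4)
    # Diminishing returns: the gain from pushing coordinate i by k is
    # smaller at the larger point y than at the smaller point x.
    assert f(x + k * e[i]) - f(x) >= f(y + k * e[i]) - f(y) - 1e-12
print("DR inequality held on all sampled pairs")
```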
The notion applies to integer lattices as well, with the submodularity and DR properties defined analogously; this is crucial for problems where variables count repetitions or allocations (Gu et al., 2022, Soma et al., 2016).
2. Algorithms and Complexity for DR-Submodular Maximization
2.1. Monotone DR-Submodular Maximization
For maximizing monotone DR-submodular functions over a down-closed convex set, several algorithmic paradigms achieve the optimal approximation ratio.
- Frank-Wolfe type algorithms give a $(1-1/e)$-approximation in polynomial time under down-closed convex constraints (Bian et al., 2016, Bian et al., 2020, Pedramfar et al., 2023). At each iteration, one solves a linear maximization over the constraint set, making them projection-free; a minimal sketch appears at the end of this subsection.
- Projected gradient ascent and derivative-free greedy methods also achieve constant-factor approximations ($1/2$ for projected gradient ascent and $1-1/e$ for derivative-free greedy), with complexity depending on whether gradient or only value oracles are available (Zhang et al., 2018, Pedramfar et al., 2023).
- For strongly DR-submodular functions (i.e., functions that are strongly concave along nonnegative directions), accelerated convergence and approximation ratios that improve with the function's curvature can be achieved (Sadeghi et al., 2021).
When the convex set is not down-closed or when constraints are more general, the best achievable polynomial-time ratio drops to $1/2$ for monotone objectives (Pedramfar et al., 2023).
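A minimal Frank-Wolfe (continuous-greedy) sketch for the monotone, down-closed case is given below. The concave-of-nonnegative-linear objective, the budgeted-box polytope, and the use of scipy.optimize.linprog as the linear-maximization oracle are illustrative assumptions; the cited papers analyze the general scheme rather than this particular instance.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, m, budget = 8, 5, 3.0
A = rng.uniform(0.0, 1.0, size=(m, n))
w = rng.uniform(0.5, 1.5, size=m)

def f(x):
    # Monotone DR-submodular: a concave, nondecreasing function of
    # nonnegative linear forms has a nonnegative gradient and an
    # entrywise nonpositive Hessian.
    return float(w @ np.log1p(A @ x))

def grad_f(x):
    return (w / (1.0 + A @ x)) @ A

def frank_wolfe(T=100):
    x = np.zeros(n)
    for _ in range(T):
        g = grad_f(x)
        # Linear-maximization oracle over P = {v in [0,1]^n : sum(v) <= budget}.
        res = linprog(-g, A_ub=np.ones((1, n)), b_ub=[budget],
                      bounds=[(0.0, 1.0)] * n)
        x = x + res.x / T    # final x is an average of points of P, hence feasible
    return x

x_hat = frank_wolfe()
print(f"f(x_hat) = {f(x_hat):.4f}, budget used = {x_hat.sum():.3f}")
```

With step size $1/T$ the iterate traces the continuous-greedy path, which is what underlies the $(1-1/e)$ guarantee for monotone objectives over down-closed constraints.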
2.2. Non-Monotone DR-Submodular Maximization
Non-monotone DR-submodular maximization is fundamentally harder; approximation ratios are strictly lower and depend sharply on the feasible set.
- For general convex (not necessarily down-closed) constraints, a $0.25$-approximation in sub-exponential time is achievable using a Frank-Wolfe style method with a non-constant step size and careful sequence analysis (Du et al., 2022). Prior sub-exponential-time results achieved a weaker constant of roughly $1/(3\sqrt{3}) \approx 0.192$ (Du et al., 2022, Dürr et al., 2019).
- For down-closed convex sets, polynomial-time algorithms achieve $1/e$ approximation (Bian et al., 2017, Bian et al., 2020, Dürr et al., 2019), paralleling the multilinear extension paradigm in set-submodular optimization.
- If maximizing over the unit box $[0,1]^n$, optimal polynomial-time procedures achieve $1/2$ (Niazadeh et al., 2018), with bi-greedy, binary-search, or double-greedy schemes exploiting coordinatewise concavity (a simplified sketch appears after this list).
- On the integer lattice (box constraints), double-greedy and accelerated double-greedy (e.g., binary-search) algorithms give a $1/2$-approximation, with the accelerated variant running in time polynomial in the dimension and the logarithm of the coordinate range (Soma et al., 2016, Gu et al., 2022).
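The two-pointer structure behind these box-constrained schemes can be illustrated as follows. This is a simplified deterministic sketch under an assumed quadratic objective, with the 1-D maximizations done by grid search; the cited algorithms use exact or binary-search 1-D solvers and more careful (e.g., randomized) update rules to certify the optimal $1/2$ ratio.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
# Non-monotone DR-submodular quadratic: all Hessian entries nonpositive,
# mixed-sign linear term makes the function non-monotone on [0,1]^n.
H = -rng.uniform(0.1, 1.0, size=(n, n))
H = (H + H.T) / 2
h = rng.uniform(-1.0, 2.0, size=n)

def f(x):
    return 0.5 * x @ H @ x + h @ x

def double_greedy(grid=np.linspace(0.0, 1.0, 101)):
    x, y = np.zeros(n), np.ones(n)         # "low" and "high" pointers
    for i in range(n):
        # Best value of coordinate i when starting from x ...
        vals_a = [f(np.concatenate([x[:i], [z], x[i + 1:]])) for z in grid]
        # ... and when starting from y.
        vals_b = [f(np.concatenate([y[:i], [z], y[i + 1:]])) for z in grid]
        delta_a = max(vals_a) - f(x)
        delta_b = max(vals_b) - f(y)
        z = grid[int(np.argmax(vals_a))] if delta_a >= delta_b else grid[int(np.argmax(vals_b))]
        x[i] = y[i] = z                     # the two pointers now agree on coordinate i
    return x

x_hat = double_greedy()
print(f"f(x_hat) = {f(x_hat):.4f}")
```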
The inapproximability threshold is $1/2$ for box constraints and $0.478$ for general down-closed polytopes under standard complexity assumptions (Buchbinder et al., 2023, Bian et al., 2020).
3. Extensions: Lattices, Subspace Selection, and Beyond
DR-submodularity admits generalizations to algebraic lattices, enabling unified analysis for subspace selection, PCA, and dictionary learning. In this framework:
- Directional DR-submodularity on lattices captures diminishing returns with respect to lattice “atoms” and enables tight greedy and double-greedy approximations for subspace constraints (Nakashima et al., 2018).
- PCA and generalized PCA objectives are monotone bidirectional DR-submodular, explaining the optimality of greedy eigenvector selection; a toy numerical check follows this list.
- Sparse dictionary selection is downward DR-submodular with an additive coherence gap.
Greedy, density-based, and double-greedy algorithms retain provable approximation guarantees under lattice versions of height or knapsack constraints.
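As a small illustration of the PCA claim above (a toy check, not the lattice algorithm of Nakashima et al., 2018), greedily adding the direction that most increases the captured variance $\mathrm{tr}(U^\top S U)$ recovers the leading eigenvectors, so the greedy value matches the optimum.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))
S = np.cov(X, rowvar=False)               # sample covariance matrix

def greedy_pca(S, k):
    directions, residual = [], S.copy()
    for _ in range(k):
        # The single direction with the largest marginal gain in captured
        # variance is the leading eigenvector of the deflated covariance.
        vals, vecs = np.linalg.eigh(residual)
        u = vecs[:, -1]
        directions.append(u)
        residual = residual - vals[-1] * np.outer(u, u)   # deflation
    return np.column_stack(directions)

U = greedy_pca(S, k=3)
greedy_var = np.trace(U.T @ S @ U)
opt_var = np.sort(np.linalg.eigvalsh(S))[-3:].sum()       # sum of top-3 eigenvalues
print(f"greedy: {greedy_var:.4f}  optimal: {opt_var:.4f}")
```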
4. Applications in Machine Learning and Optimization
DR-submodular maximization models core mechanisms in several domains:
- Influence maximization: Assigning continuous resource levels to nodes in a social or communication network often produces DR-submodular objectives via multilinear or more complex extensions (Bian et al., 2020).
- MAP inference in determinantal point processes: The softmax log-determinant extension is DR-submodular (Niazadeh et al., 2018, Bian et al., 2020); a concrete evaluation of this extension appears after this list.
- Mean-field variational inference for probabilistic log-submodular models: The evidence lower bound (ELBO) is DR-submodular in the mean parameters; DR-DoubleGreedy achieves a $1/2$ approximation to mean-field inference (Bian et al., 2018).
- Submodular quadratic programming: Quadratic forms $f(x) = \tfrac{1}{2}x^\top H x + h^\top x$ with all entries of $H$ nonpositive are DR-submodular (nonpositive off-diagonal entries alone give lattice submodularity); maximization over convex polytopes falls into the framework (Niazadeh et al., 2018, Bian et al., 2020).
- Budget allocation, resource scheduling, energy management, and other settings where marginal utility decreases as allocation increases across multiple units.
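As a concrete instance of the DPP application above (the kernel below is synthetic), the softmax extension $f(x) = \log\det(\operatorname{diag}(x)(L - I) + I)$ is DR-submodular on $[0,1]^n$ and coincides with $\log\det(L_S)$ at integral points $x = \mathbf{1}_S$:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
V = rng.normal(size=(n, n))
L = V @ V.T + 0.1 * np.eye(n)            # synthetic positive-definite DPP kernel

def softmax_extension(x):
    """log det(diag(x) (L - I) + I), the continuous DPP MAP relaxation."""
    M = np.diag(x) @ (L - np.eye(n)) + np.eye(n)
    sign, logdet = np.linalg.slogdet(M)
    return logdet if sign > 0 else -np.inf

print("f at the fractional point 0.5 * ones:", round(softmax_extension(np.full(n, 0.5)), 4))

# At an integral point the extension equals the set-function value log det(L_S).
s = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
idx = s.astype(bool)
print("extension:", round(softmax_extension(s), 4),
      " log det(L_S):", round(np.linalg.slogdet(L[np.ix_(idx, idx)])[1], 4))
```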
5. Hardness, Lower Bounds, and the State of the Art
The landscape of achievable approximation ratios is sharply determined by the convexity and monotonicity structure:
| Setting | Approx. Ratio (Poly Time) | Lower Bound / Inapproximability |
|---|---|---|
| Monotone, down-closed convex | $1-1/e$ | NP-hard to exceed $1-1/e$ |
| Monotone, general convex | $1/2$ | $1/2$ is the best achievable in polynomial time (Pedramfar et al., 2023) |
| Non-monotone, down-closed convex | $1/e$ | inapproximable beyond $0.478$ (Buchbinder et al., 2023) |
| Non-monotone, box $[0,1]^n$ | $1/2$ | hard to exceed $1/2$ (Niazadeh et al., 2018) |
| Non-monotone, general convex | $0.25$ (sub-exponential), $0.192$ (polynomial) | matching hardness open (see Section 7) |
Recent breakthroughs have leveraged improved damage bounds on multilinear extensions, yielding a $0.401$-approximation for general down-closed polytopes, which closes part of the gap to the $0.478$ inapproximability barrier (Buchbinder et al., 2023).
6. Online, Decentralized, and Oracle-Efficient Extensions
- Online non-monotone DR-submodular maximization admits regret-minimizing algorithms with the same constants as offline settings and sublinear regret, e.g., $1/e$ over down-closed domains, $1/2$ over the box $[0,1]^n$, and roughly $1/(3\sqrt{3}) \approx 0.192$ for general convex sets (Thang et al., 2019, Dürr et al., 2019); a minimal online baseline is sketched after this list.
- Bandit and stochastic value-oracle settings: Recent unified Frank–Wolfe frameworks provide first regret guarantees under stochastic and bandit-feedback scenarios (Pedramfar et al., 2023).
- Derivative-free and robust optimization: For monotone continuous DR-submodular objectives, derivative-free greedy methods are robust to noise, retaining a $(1-1/e)$-approximation guarantee up to controllable error (Zhang et al., 2018).
- Decentralized optimization: Communication-efficient decentralized online DR-submodular maximization achieves $(1-1/e)$-regret with only one gradient query and one message per round, scaling to large networks (Zhang et al., 2022).
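For orientation, a minimal online baseline is sketched below: projected online gradient ascent over the box for a stream of monotone DR-submodular rewards. The reward family, step-size schedule, and projection-by-clipping are illustrative assumptions; the cited papers use Meta-Frank-Wolfe-style and decentralized schemes with stronger guarantees.

```python
import numpy as np

rng = np.random.default_rng(6)
n, T = 10, 200

def make_reward():
    # Random monotone DR-submodular reward f_t(x) = <w, log(1 + A x)>.
    A = rng.uniform(0.0, 1.0, size=(3, n))
    w = rng.uniform(0.5, 1.5, size=3)
    return (lambda x: float(w @ np.log1p(A @ x)),
            lambda x: (w / (1.0 + A @ x)) @ A)

x = np.full(n, 0.5)
total = 0.0
for t in range(1, T + 1):
    f_t, grad_t = make_reward()
    total += f_t(x)                                 # play x, collect reward
    eta = 1.0 / np.sqrt(t)                          # diminishing step size
    x = np.clip(x + eta * grad_t(x), 0.0, 1.0)      # ascent step, then project onto the box
print(f"average per-round reward over {T} rounds: {total / T:.4f}")
```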
7. Open Problems and Research Directions
- Tightening the gap between inapproximability and algorithmic lower bounds, especially for the general non-monotone case.
- Extending lattice submodularity to encompass constraints beyond cardinality/height, e.g., general matroid-like structures (Nakashima et al., 2018, Maehara et al., 2019).
- Further reducing oracle complexity or per-iteration cost, particularly in high-dimensional, stochastic, or adversarial environments (Pedramfar et al., 2023, Du et al., 2022).
- Developing more refined structural bounds (e.g., history-dependent damage bounds) to push constants for constrained maximization (Buchbinder et al., 2023).
- Characterizing the relations and transformations between continuous DR-submodular maximization and set-function or integer-lattice optimization, particularly for relaxations, rounding, and sampling (Bian et al., 2020, Gu et al., 2022).
References
- (Du et al., 2022) An improved approximation algorithm for maximizing a DR-submodular function over a convex set
- (Niazadeh et al., 2018) Optimal Algorithms for Continuous Non-monotone Submodular and DR-Submodular Maximization
- (Buchbinder et al., 2023) Constrained Submodular Maximization via New Bounds for DR-Submodular Functions
- (Gu et al., 2022) Profit Maximization in Social Networks and Non-monotone DR-submodular Maximization
- (Bian et al., 2018) Optimal DR-Submodular Maximization and Applications to Provable Mean Field Inference
- (Bian et al., 2020) Continuous Submodular Function Maximization
- (Bian et al., 2017) Continuous DR-submodular Maximization: Structure and Algorithms
- (Sadeghi et al., 2021) Fast First-Order Methods for Monotone Strongly DR-Submodular Maximization
- (Bian et al., 2016) Guaranteed Non-convex Optimization: Submodular Maximization over Continuous Domains
- (Nakashima et al., 2018) Subspace Selection via DR-Submodular Maximization on Lattices
- (Dürr et al., 2019) Non-monotone DR-submodular Maximization: Approximation and Regret Guarantees
- (Pedramfar et al., 2023) A Unified Approach for Maximizing Continuous DR-submodular Functions
- (Zhang et al., 2018) Maximizing Monotone DR-submodular Continuous Functions by Derivative-free Optimization
- (Zhang et al., 2022) Communication-Efficient Decentralized Online Continuous DR-Submodular Maximization
- (Thang et al., 2019) Online Non-Monotone DR-submodular Maximization
- (Soma et al., 2016) Non-monotone DR-Submodular Function Maximization
- (Schiabel et al., 2021) Randomized Algorithms for Monotone Submodular Function Maximization on the Integer Lattice
- (Maehara et al., 2019) Multiple Knapsack-Constrained Monotone DR-Submodular Maximization on Distributive Lattice --- Continuous Greedy Algorithm on Median Complex ---