DR-Submodular Maximization

Updated 12 December 2025
  • DR-submodular maximization is the optimization of functions that exhibit diminishing returns across continuous, integer-lattice, and algebraic domains, generalizing classical set submodularity.
  • Algorithmic approaches such as Frank-Wolfe, projected gradient ascent, and greedy methods provide provable approximations under various convex, combinatorial, and lattice constraints.
  • Applications span influence maximization, MAP inference, resource allocation, and energy management, demonstrating the technique’s versatility in machine learning and optimization.

A diminishing-returns (DR) submodular function generalizes classical submodularity from set functions to continuous, integer-lattice, and even lattice-theoretic domains. DR-submodular maximization is the task of optimizing such functions—often non-monotone—subject to combinatorial, convex, lattice, or algebraic constraints. This problem arises in a wide range of applications in machine learning, stochastic inference, resource allocation, and combinatorial optimization, due to the diminishing-returns structure of influence models, entropy relaxations, and probabilistic graphical models.

1. Foundations: Definitions and Mathematical Structure

A function $F:[0,1]^n \rightarrow \mathbb{R}_+$ (or more generally, one defined on a product of intervals or of integer ranges) is DR-submodular if it exhibits a form of diminishing returns in each coordinate. Formally, for any $x \le y$ (coordinatewise), any $i \in [n]$, and any $\delta \ge 0$,

$$F(x + \delta e_i) - F(x) \;\ge\; F(y + \delta e_i) - F(y),$$

provided the increments remain in the domain (Du et al., 2022, Niazadeh et al., 2018). If $F$ is differentiable, this is equivalent to the partial derivatives being coordinatewise non-increasing, i.e., $x \ge y \implies \nabla F(x) \le \nabla F(y)$; if $F$ is twice differentiable, it is equivalent to all second-order partial derivatives being nonpositive, $\frac{\partial^2 F(x)}{\partial x_i \partial x_j} \le 0$ for all $i, j$. This generalizes the diminishing-returns property from set functions to continuous domains (Bian et al., 2020). DR-submodular functions are necessarily (weakly) submodular in the lattice sense, but the converse does not hold unless an additional coordinatewise concavity condition is satisfied (Niazadeh et al., 2018).
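As a concrete illustration (a minimal sketch, not drawn from the cited papers), the snippet below builds a quadratic function whose Hessian is entrywise nonpositive, so it satisfies the second-order condition above, and spot-checks the coordinatewise diminishing-returns inequality at a random pair of comparable points.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Entrywise-nonpositive symmetric H: the Hessian of F below is H itself,
# so every second partial derivative is <= 0 (the second-order condition).
H = -np.abs(rng.normal(size=(n, n)))
H = (H + H.T) / 2
h = np.abs(rng.normal(size=n))

def F(x):
    return h @ x + 0.5 * x @ H @ x

# Choose x <= y inside [0,1]^n and a coordinate increment that stays in the box.
x = rng.uniform(0.0, 0.4, size=n)
y = x + rng.uniform(0.0, 0.4, size=n)      # y >= x coordinatewise
delta, i = 0.1, 2
e_i = np.eye(n)[i]

gain_at_x = F(x + delta * e_i) - F(x)
gain_at_y = F(y + delta * e_i) - F(y)
print(gain_at_x >= gain_at_y)              # True: marginal gains shrink as x grows
```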

The notion applies to integer lattices as well, with the submodularity and DR properties defined analogously; this is crucial for problems where variables count repetitions or allocations (Gu et al., 2022, Soma et al., 2016).

2. Algorithms and Complexity for DR-Submodular Maximization

2.1. Monotone DR-Submodular Maximization

For the monotone maximization over a convex or down-closed set, several algorithmic paradigms are provably optimal.

  • Frank-Wolfe type algorithms give a $(1-1/e)$-approximation in polynomial time under down-closed convex constraints (Bian et al., 2016, Bian et al., 2020, Pedramfar et al., 2023); a sketch appears at the end of this subsection. At each iteration, one solves a linear maximization over the constraint set, making these methods projection-free.
  • Projected gradient ascent and derivative-free greedy methods also achieve $(1-1/e)-\varepsilon$ approximations with $O(1/\varepsilon)$ or $O(1/\varepsilon^3)$ complexity, depending on access to gradients or value oracles (Zhang et al., 2018, Pedramfar et al., 2023).
  • For strongly DR-submodular functions (i.e., functions concave along nonnegative directions with a quadratic modulus), accelerated convergence and improved ratios of $(1-c_f/e)$, where $c_f$ is the curvature, can be achieved (Sadeghi et al., 2021).

When the convex set is not down-closed or when constraints are more general, the best achievable polynomial-time ratio drops to $1/2$ for monotone objectives (Pedramfar et al., 2023).
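The following is a minimal Frank-Wolfe (continuous greedy) sketch for the monotone, down-closed case discussed above. It is illustrative only and makes simplifying assumptions: exact gradient access, a small polytope $\{x \in [0,1]^n : Ax \le b,\ x \ge 0\}$, and a synthetic monotone DR-submodular quadratic objective.

```python
import numpy as np
from scipy.optimize import linprog

def frank_wolfe(grad_F, A, b, n, T=100):
    """Continuous-greedy-style Frank-Wolfe with fixed step size 1/T."""
    x = np.zeros(n)                            # the origin is feasible (down-closed set)
    for _ in range(T):
        g = grad_F(x)
        # Linear maximization oracle: v = argmax_{v in P} <g, v>.
        res = linprog(-g, A_ub=A, b_ub=b, bounds=[(0.0, 1.0)] * n, method="highs")
        x = x + res.x / T                      # the average of T points of P stays in P
    return x

# Synthetic monotone DR-submodular quadratic: H <= 0 entrywise, gradient kept nonnegative.
rng = np.random.default_rng(1)
n = 6
H = -np.abs(rng.normal(size=(n, n)))
H = (H + H.T) / 2
h = -H.sum(axis=1) + 0.1                       # ensures grad F = h + Hx >= 0 on [0,1]^n
grad_F = lambda x: h + H @ x
A = np.ones((1, n)); b = np.array([2.0])       # a simple down-closed budget constraint

x_hat = frank_wolfe(grad_F, A, b, n)
print(np.round(x_hat, 3), "F =", round(h @ x_hat + 0.5 * x_hat @ H @ x_hat, 3))
```

With step size $1/T$ the final iterate is an average of feasible points and hence feasible, and the standard analysis gives $F(x_T) \ge (1-1/e)F(x^\ast) - O(1/T)$ for monotone DR-submodular $F$.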

2.2. Non-Monotone DR-Submodular Maximization

Non-monotone DR-submodular maximization is fundamentally harder; approximation ratios are strictly lower and depend sharply on the feasible set.

  • For general convex (not necessarily down-closed) constraints, a $0.25$-approximation in sub-exponential time is achievable using a Frank–Wolfe style method with a non-constant step size and careful sequence analysis (Du et al., 2022). Prior sub-exponential-time results achieved $1/(3\sqrt{3}) \approx 0.192$ (Du et al., 2022, Dürr et al., 2019).
  • For down-closed convex sets, polynomial-time algorithms achieve $1/e$ approximation (Bian et al., 2017, Bian et al., 2020, Dürr et al., 2019), paralleling the multilinear extension paradigm in set-submodular optimization.
  • If maximizing over the unit box $[0,1]^n$, optimal polynomial-time procedures achieve $1/2$ (Niazadeh et al., 2018), with bi-greedy, binary-search, or double-greedy schemes exploiting coordinatewise concavity (a simplified sketch appears at the end of this subsection).
  • On the integer lattice (box constraints), double-greedy and accelerated double-greedy (e.g., binary-search) algorithms give a $1/2$-approximation in $O(n\log B)$ time (Soma et al., 2016, Gu et al., 2022).

The inapproximability threshold is $1/2$ for box constraints and $0.478$ for general down-closed polytopes under standard complexity assumptions (Buchbinder et al., 2023, Bian et al., 2020).
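The sketch below is a simplified, randomized bi-greedy pass over the coordinates of $[0,1]^n$, in the spirit of the double-greedy schemes cited above; it is illustrative only. The cited algorithms differ in their details (exact one-dimensional maximization, binary search, or deterministic value selection), whereas here the one-dimensional step is a coarse grid search.

```python
import numpy as np

def bi_greedy_box(F, n, grid=101, seed=0):
    """One coordinatewise pass keeping a lower iterate x and an upper iterate y."""
    rng = np.random.default_rng(seed)
    zs = np.linspace(0.0, 1.0, grid)
    x, y = np.zeros(n), np.ones(n)
    for i in range(n):
        # Best value for coordinate i when starting from the lower iterate x ...
        vals_a = np.array([F(np.where(np.arange(n) == i, z, x)) for z in zs])
        ua, da = zs[vals_a.argmax()], vals_a.max() - F(x)
        # ... and when starting from the upper iterate y.
        vals_b = np.array([F(np.where(np.arange(n) == i, z, y)) for z in zs])
        ub, db = zs[vals_b.argmax()], vals_b.max() - F(y)
        # Commit both iterates to one candidate value, chosen proportionally to the gains.
        p = 1.0 if da + db == 0 else da / (da + db)
        x[i] = y[i] = ua if rng.random() < p else ub
    return x                                   # x == y after the pass

# Non-monotone DR-submodular test function: entrywise-nonpositive quadratic.
rng = np.random.default_rng(3)
n = 4
H = -np.abs(rng.normal(size=(n, n))); H = (H + H.T) / 2
h = np.abs(rng.normal(size=n))
F = lambda z: h @ z + 0.5 * z @ H @ z

x_hat = bi_greedy_box(F, n)
print(np.round(x_hat, 2), "F =", round(F(x_hat), 3))
```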

3. Extensions: Lattices, Subspace Selection, and Beyond

DR-submodularity admits generalizations to algebraic lattices, enabling unified analysis for subspace selection, PCA, and dictionary learning. In this framework:

  • Directional DR-submodularity on lattices captures diminishing returns with respect to lattice “atoms” and enables tight greedy and double-greedy approximations for subspace constraints (Nakashima et al., 2018).
  • PCA and generalized PCA objectives are monotone bidirectional DR-submodular, explaining the optimality of greedy eigenvector selection (illustrated numerically at the end of this section).
  • Sparse dictionary selection is downward DR-submodular with an additive coherence gap.

Greedy, density-based, and double-greedy algorithms retain provable approximation guarantees under lattice versions of height or knapsack constraints.
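A small numerical illustration of the PCA claim above (an illustrative sketch under the assumption that the candidate dictionary consists of the covariance eigenvectors, rather than the cited lattice framework itself): greedily growing a subspace to maximize the captured variance $\mathrm{tr}(P_S \Sigma)$ selects the leading eigenvectors and recovers the top-$k$ PCA subspace.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 6, 3
A = rng.normal(size=(d, d))
Sigma = A @ A.T                                    # covariance matrix
eigvals, eigvecs = np.linalg.eigh(Sigma)           # eigenvalues in ascending order
candidates = [eigvecs[:, j] for j in range(d)]     # dictionary of candidate directions

def captured_variance(basis):
    if not basis:
        return 0.0
    B = np.column_stack(basis)
    P = B @ np.linalg.pinv(B)                      # projection onto span(basis)
    return float(np.trace(P @ Sigma))

selected, remaining = [], list(range(d))
for _ in range(k):
    gains = [captured_variance(selected + [candidates[j]]) - captured_variance(selected)
             for j in remaining]
    best = remaining[int(np.argmax(gains))]        # greedy: largest marginal gain
    selected.append(candidates[best])
    remaining.remove(best)

# Greedy captures exactly the variance of the top-k PCA subspace.
print(round(captured_variance(selected), 6), round(float(eigvals[-k:].sum()), 6))
```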

4. Applications in Machine Learning and Optimization

DR-submodular maximization models core mechanisms in several domains:

  • Influence maximization: Assigning continuous resource levels to nodes in a social or communication network often produces DR-submodular objectives via multilinear or more complex extensions (Bian et al., 2020).
  • MAP inference in determinantal point processes: The softmax log-determinant extension is DR-submodular (Niazadeh et al., 2018, Bian et al., 2020); a numerical spot-check follows this list.
  • Mean-field variational inference for probabilistic log-submodular models: The evidence lower bound (ELBO) is DR-submodular in the mean parameters; DR-DoubleGreedy achieves a $1/2$ approximation to mean-field inference (Bian et al., 2018).
  • Submodular quadratic programming: Quadratic forms $f(x) = x^T H x + h^T x$ with entrywise nonpositive $H$ are DR-submodular (nonpositive off-diagonal entries alone yield submodularity without the DR property); maximization over convex polytopes falls into the framework (Niazadeh et al., 2018, Bian et al., 2020).
  • Budget allocation, resource scheduling, energy management and other applications where marginal utility decreases with increased allocation across multiple units.
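As a quick numerical spot-check of the DPP example above (illustrative, not taken from the cited papers), the snippet below evaluates the softmax extension $F(x) = \log\det(\mathrm{diag}(x)(L-I) + I)$ of a PSD kernel $L$ at a random interior point and estimates all second partial derivatives by finite differences; each estimate should be nonpositive, matching the second-order condition of Section 1.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
B = rng.normal(size=(n, n))
L = B @ B.T + 0.1 * np.eye(n)                      # positive-definite DPP kernel

def F(x):
    # Softmax extension: log det(diag(x)(L - I) + I); the determinant is positive for PSD L.
    return np.linalg.slogdet(np.diag(x) @ (L - np.eye(n)) + np.eye(n))[1]

x = rng.uniform(0.2, 0.8, size=n)
eps = 1e-3
second_partials = np.empty((n, n))
for i in range(n):
    for j in range(n):
        ei, ej = eps * np.eye(n)[i], eps * np.eye(n)[j]
        # Forward finite-difference estimate of d^2 F / (dx_i dx_j).
        second_partials[i, j] = (F(x + ei + ej) - F(x + ei) - F(x + ej) + F(x)) / eps**2

print("largest estimated second partial:", second_partials.max())  # <= 0 up to noise
```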

5. Hardness, Lower Bounds, and the State of the Art

The landscape of achievable approximation ratios is sharply determined by the convexity and monotonicity structure:

| Setting | Approx. ratio (poly time) | Lower bound / inapproximability |
| --- | --- | --- |
| Monotone, down-closed convex | $1-1/e$ | NP-hard to exceed $1-1/e$ |
| Monotone, general convex | $1/2$ | NP-hard to exceed $1/2$ |
| Non-monotone, down-closed convex | $1/e$ (recently $0.401$) | Hard to exceed $0.478$ |
| Non-monotone, box $[0,1]^n$ | $1/2$ | NP-hard to exceed $1/2$ |
| Non-monotone, general convex | $0.25$ (sub-exponential), $0.192$ (poly) | Hard to exceed $0.478$ |

Recent breakthroughs have leveraged improved damage bounds on multilinear extensions, yielding a $0.401$-approximation for general down-closed polytopes, which closes part of the gap to the $0.478$ inapproximability barrier (Buchbinder et al., 2023).

6. Online, Decentralized, and Oracle-Efficient Extensions

  • Online non-monotone DR-submodular maximization admits regret-minimizing algorithms that attain the same approximation constants as the offline setting with sublinear regret, e.g., $1/e$ over down-closed domains, $1/2$ over $[0,1]^n$, and $(1-1/\sqrt{3})/3$ for general convex sets (Thang et al., 2019, Dürr et al., 2019); a generic online sketch follows this list.
  • Bandit and stochastic value-oracle settings: Recent unified Frank–Wolfe frameworks provide the first regret guarantees under stochastic and bandit-feedback scenarios (Pedramfar et al., 2023).
  • Derivative-free and robust optimization: For monotone continuous DR-submodular objectives, derivative-free greedy methods are robust to noise, retaining a $(1-e^{-\beta})$-approximation (Zhang et al., 2018).
  • Decentralized optimization: Communication-efficient decentralized online DR-submodular maximization achieves $(1-1/e)$-regret of $O(\sqrt{T})$ with only one gradient query and one message per round, scaling to large networks (Zhang et al., 2022).
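For intuition, a generic online sketch follows; as a simplifying assumption it uses plain projected gradient ascent over the box with a synthetic sequence of DR-submodular quadratic rewards, not the Meta-Frank-Wolfe or measured-greedy machinery that the cited papers use to obtain the stated approximation-regret guarantees.

```python
import numpy as np

rng = np.random.default_rng(5)
n, T, eta = 5, 200, 0.05
x = np.full(n, 0.5)
total_reward = 0.0
for t in range(T):
    # Round-t reward: a DR-submodular quadratic (entrywise-nonpositive H_t),
    # revealed only after the learner commits to the current point x.
    Ht = -np.abs(rng.normal(size=(n, n)))
    Ht = (Ht + Ht.T) / 2
    ht = np.abs(rng.normal(size=n))
    total_reward += ht @ x + 0.5 * x @ Ht @ x
    grad = ht + Ht @ x                         # gradient feedback for this round
    x = np.clip(x + eta * grad, 0.0, 1.0)      # projection onto [0,1]^n is a clip
print("average per-round reward:", round(total_reward / T, 3))
```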

7. Open Problems and Research Directions

  • Tightening the gap between inapproximability thresholds and the best known algorithmic guarantees, especially in the general non-monotone case.
  • Extending lattice submodularity to encompass constraints beyond cardinality/height, e.g., general matroid-like structures (Nakashima et al., 2018, Maehara et al., 2019).
  • Further reducing oracle complexity or per-iteration cost, particularly in high-dimensional, stochastic, or adversarial environments (Pedramfar et al., 2023, Du et al., 2022).
  • Developing more refined structural bounds (e.g., history-dependent damage bounds) to push constants for constrained maximization (Buchbinder et al., 2023).
  • Characterizing the relations and transformations between continuous DR-submodular maximization and set-function or integer-lattice optimization, particularly for relaxations, rounding, and sampling (Bian et al., 2020, Gu et al., 2022).
