DR-Submodularity: Theory and Algorithms

Updated 12 August 2025
  • DR-submodularity is a property that extends the diminishing returns principle to multi-dimensional continuous and discrete domains, emphasizing coordinate-wise concavity.
  • It enables efficient greedy and double greedy algorithms with provable approximation guarantees across applications like budget allocation, sensor placement, and facility location.
  • Its rigorous analysis uncovers computational hardness in constrained settings, driving the development of novel approaches for challenging non-convex optimization problems.

DR-submodularity is a generalization of the classical diminishing returns property of discrete submodular set functions to more general domains, such as the integer lattice, distributive lattices, and continuous domains. A function is DR-submodular if its marginal gain from increasing a coordinate (or “adding” an element) decreases as the current input increases, formalizing the concept that early investments or selections provide larger incremental benefits than later ones. DR-submodularity underpins the structure of many practical optimization problems in machine learning, economics, network theory, and combinatorial optimization, enabling the design of polynomial-time algorithms with provable approximation guarantees in otherwise intractable non-convex or combinatorial settings.

1. Formal Definitions and Fundamental Properties

Traditional submodularity for set functions $f: 2^N \to \mathbb{R}$ is characterized by $f(S \cup \{e\}) - f(S) \geq f(T \cup \{e\}) - f(T)$ for all $S \subseteq T \subseteq N$ and $e \notin T$. This is the diminishing returns (DR) property: marginal gains decrease as the set grows.
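
To ground the set-function case, here is a minimal sketch that exhaustively verifies this inequality for a small coverage function; the ground set and covered items are a hypothetical toy instance, not drawn from any cited paper.

```python
from itertools import chain, combinations

# A canonical submodular set function (hypothetical instance): coverage,
# f(S) = number of distinct items covered by the sets indexed by S.
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}
N = frozenset(cover)

def f(S):
    return len(set().union(*(cover[i] for i in S))) if S else 0

def subsets(X):
    return map(frozenset,
               chain.from_iterable(combinations(X, r) for r in range(len(X) + 1)))

# Exhaustively verify f(S ∪ {e}) - f(S) >= f(T ∪ {e}) - f(T)
# over all S ⊆ T ⊆ N and e ∉ T.
for S in subsets(N):
    for T in subsets(N):
        if S <= T:
            for e in N - T:
                assert f(S | {e}) - f(S) >= f(T | {e}) - f(T)
print("DR inequality verified on the coverage instance")
```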

DR-submodularity extends this to multivariate and continuous domains. For functions $f: \mathcal{D} \to \mathbb{R}$ where $\mathcal{D}$ is a product domain (often the integer lattice $\{0, \ldots, C\}^n$, a distributive lattice, or $[0,1]^n$), the DR property requires that for all $x \leq y$ (coordinate-wise), every coordinate $i$, and every increment $k \geq 0$ keeping the arguments in $\mathcal{D}$: $f(x + k\chi_i) - f(x) \geq f(y + k\chi_i) - f(y)$, where $\chi_i$ is the $i$th standard basis vector.

Key properties:

  • For set functions, submodularity and the DR property are equivalent.
  • For integer or continuous domains, submodularity (via the lattice inequality $f(x) + f(y) \geq f(x \wedge y) + f(x \vee y)$) does not imply DR-submodularity; additional coordinate-wise concavity is often required (Gottschalk et al., 2015, Bian et al., 2020).
  • In the continuous setting, if $f$ is twice differentiable, DR-submodularity is equivalent to the Hessian being entrywise non-positive: the off-diagonal cross-partial derivatives satisfy $\partial^2 f / \partial x_i \partial x_j \leq 0$ and the diagonal entries satisfy $\partial^2 f / \partial x_i^2 \leq 0$ (i.e., $f$ is coordinate-wise concave) (Bian et al., 2020).
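
As a numerical sanity check of the Hessian characterization above, here is a minimal sketch on a hypothetical quadratic instance; the matrix $A$, vector $b$, and sampling ranges are illustrative choices, not from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Hypothetical test instance: f(x) = b.x - 0.5 x^T A x with A >= 0 entrywise.
# Its Hessian is -A, so every second partial derivative is non-positive,
# which by the characterization above makes f DR-submodular on [0,1]^n.
A = rng.uniform(0.0, 1.0, size=(n, n))
A = (A + A.T) / 2
b = rng.uniform(1.0, 2.0, size=n)

def f(x):
    return b @ x - 0.5 * x @ A @ x

# Spot-check the defining inequality f(x + k e_i) - f(x) >= f(y + k e_i) - f(y)
# on random coordinate-wise comparable pairs x <= y.
for _ in range(1000):
    x = rng.uniform(0.0, 0.5, size=n)
    y = x + rng.uniform(0.0, 0.5, size=n)   # y >= x coordinate-wise
    step = np.zeros(n)
    step[rng.integers(n)] = rng.uniform(0.0, 0.3)
    assert f(x + step) - f(x) >= f(y + step) - f(y) - 1e-12
print("DR inequality held on all sampled pairs")
```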

2. Algorithmic Frameworks for DR-Submodular Maximization

Efficient algorithms for DR-submodular maximization typically build on generalizations of greedy or double greedy paradigms and exploit concavity along nonnegative directions.

  • Unconstrained Maximization:
    • For monotone DR-submodular functions over distributive lattices, a greedy approach yields a $1/2$-approximation (Gottschalk et al., 2015).
    • For set functions and integer lattices, double greedy frameworks achieve a $1/2$-approximation for DR-submodular objectives (randomized), and $1/3$ for general submodular (non-DR) functions (Gottschalk et al., 2015, Soma et al., 2016).
    • For continuous domains with box constraints, the DR-DoubleGreedy algorithm achieves a tight $1/2$-approximation in linear time (Bian et al., 2018).
  • Constrained Maximization:
    • For matroid or poset matroid constraints and monotone DR-submodular functions, greedy selection (augmenting with feasible elements maximizing marginal gain) attains a $1/2$-approximation; for a cardinality (uniform matroid) constraint, it reaches $(1 - 1/e)$ (Gottschalk et al., 2015).
    • Continuous greedy algorithms and Frank-Wolfe-style projection-free optimization provide a $(1 - 1/e)$-approximation for monotone DR-submodular functions under down-closed convex constraints (Bian et al., 2016, Gottschalk et al., 2015, Bian et al., 2017); a minimal sketch of this scheme follows the list below.
  • Non-monotone Settings:
    • For non-monotone DR-submodular maximization, double greedy-type algorithms attain a $1/(2+\varepsilon)$-approximation in strongly polynomial time (Soma et al., 2016), and a $1/4$-approximation is achievable via two-phase or discretized approaches with convergence guarantees (Bian et al., 2017, Du et al., 2022).
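
To illustrate the Frank-Wolfe-style continuous greedy scheme referenced above, here is a minimal sketch under simplifying assumptions: exact gradients, a linear maximization oracle `lmo` over the feasible region, and a toy monotone DR-submodular objective $f(x) = \sum_i \log(1 + x_i)$. The names and the budget polytope are illustrative, not taken from any one cited paper.

```python
import numpy as np

def continuous_greedy(grad_f, lmo, n, T=100):
    """Frank-Wolfe-style continuous greedy for monotone DR-submodular f.

    grad_f: returns the gradient of f at x (assumed exact here).
    lmo:    linear maximization oracle over a down-closed feasible region P,
            returning argmax_{v in P} <v, g>.
    The final iterate is an average of T feasible points, so it stays in P.
    """
    x = np.zeros(n)
    for _ in range(T):
        v = lmo(grad_f(x))   # best feasible ascent direction
        x = x + v / T        # step size 1/T
    return x

# Toy instance (hypothetical): f(x) = sum_i log(1 + x_i), which is monotone
# DR-submodular, over the budget polytope {x in [0,1]^n : sum(x) <= B}.
n, B = 4, 2
grad_f = lambda x: 1.0 / (1.0 + x)

def lmo(g):
    # For this polytope, the oracle puts mass 1 on the B largest entries of g.
    v = np.zeros(n)
    v[np.argsort(-g)[:B]] = 1.0
    return v

x_hat = continuous_greedy(grad_f, lmo, n)
print(x_hat, "f(x_hat) =", np.sum(np.log1p(x_hat)))
```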

3. Hardness and Complexity Landscape

While many tractable cases exist, particular constraints can render DR-submodular maximization intractable:

  • Knapsack Constraints: Knapsack constraints in general distributive lattice settings cause a dramatic increase in hardness: no constant-factor approximation is achievable unless $3$-SAT can be solved in sub-exponential time (Gottschalk et al., 2015). The inapproximability bound follows from a reduction from dense subhypergraph problems.
  • General Continuous Nonconvexity: DR-submodular maximization is NP-hard in general, and the best possible polynomial-time approximation ratios under value-oracle access are $1 - 1/e$ for monotone and $1/2$ for non-monotone objectives unless RP = NP (Bian et al., 2020).
  • Recent Advances: The best-known solver for multilinear extension maximization subject to down-closed constraints achieves a $0.401$ approximation, nearly matching the inapproximability bound $0.478$ (Buchbinder et al., 2023).

4. Mathematical Formulations and Approximation Guarantees

Summary of approximation guarantees:

| Problem / domain | Approximation ratio | Reference |
|---|---|---|
| Unconstrained, integer lattice (general submodular) | $1/3$ | (Gottschalk et al., 2015) |
| Unconstrained, DR-submodular | $1/2$ | (Gottschalk et al., 2015, Bian et al., 2018) |
| Monotone, cardinality constraint | $1 - 1/e$ | (Gottschalk et al., 2015, Bian et al., 2020) |
| Monotone, poset matroid constraint | $1/2$ | (Gottschalk et al., 2015) |
| Non-monotone, continuous (Frank-Wolfe / double greedy) | $1/3$ | (Bian et al., 2016) |
| Non-monotone, continuous, box constraints (DR-DoubleGreedy) | $1/2$ | (Bian et al., 2018) |
| Down-closed constraint (multilinear extension) | $0.401$ | (Buchbinder et al., 2023) |
| Knapsack, distributive lattice | no constant-factor approximation | (Gottschalk et al., 2015) |

Representative formulas:

  • Lattice DR-submodularity:

$f(x) + f(y) \geq f(x \wedge y) + f(x \vee y)$ (lattice submodularity), and $f(x + \chi_i) - f(x) \geq f(y + \chi_i) - f(y)$ for $x \leq y$ (integer-lattice DR property).

  • Continuous DR-submodularity (gradient characterization):

$\nabla_i f(x) \geq \nabla_i f(y)$ for all $i$ whenever $x \leq y$.

  • Multilinear extension (randomization for sets):

$F(x) = \mathbb{E}_{S \sim x}[f(S)]$, where $S$ is a random set including each element $i$ independently with probability $x_i$.

A representative history-aware lower bound for the multilinear extension along a continuous-greedy trajectory (Buchbinder et al., 2023):

$F\left(1-\mathbf{a}\odot e^{-\int_0^t\mathbf{x}(\tau)\,d\tau}\right) \geq e^{-t}\left[F(1-\mathbf{a}) + \sum_{i=1}^\infty \frac{1}{i!}\int_{[0,t]^i}F\left((1-\mathbf{a})\oplus_{j=1}^i \mathbf{x}(\tau_j)\right) d\tau\right]$
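
To make the multilinear extension concrete, a minimal Monte Carlo estimator sketch follows; the coverage-style objective and sample count are hypothetical choices, not from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def multilinear_estimate(f, x, samples=20000):
    """Monte Carlo estimate of F(x) = E_{S ~ x}[f(S)]: sample each element i
    independently with probability x_i and average f over the sampled sets."""
    total = 0.0
    for _ in range(samples):
        S = {i for i, xi in enumerate(x) if rng.random() < xi}
        total += f(S)
    return total / samples

# Hypothetical coverage-style objective on three elements.
cover = [{"a", "b"}, {"b", "c"}, {"c", "d"}]
f = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0

print(multilinear_estimate(f, [0.5, 0.5, 0.5]))
```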

5. Applications in Optimization and Machine Learning

DR-submodular maximization arises in a wide variety of resource allocation, combinatorial, and statistical problems; representative applications noted in the overview above include budget allocation, sensor placement, and facility location.

6. Advanced Algorithmic Developments and Open Problems

Innovations in DR-submodular optimization span multiple algorithmic fronts:

  • Continuous relaxation and rounding: Multilinear extension maximization with randomized rounding is central to approaching combinatorial constraints (Buchbinder et al., 2023).
  • Derivative-free and noisy optimization: Black-box methods such as LDGM yield robustness to non-differentiability and noise while matching gradient-based methods in approximation quality (Zhang et al., 2018); a generic zeroth-order gradient sketch follows this list.
  • Projection-free and bandit algorithms: Recent frameworks achieve the first regret guarantees for stochastic DR-submodular maximization under bandit feedback, exploiting smoothing and momentum techniques (Pedramfar et al., 2023, Pedramfar et al., 27 Apr 2024).
  • Strong/curved DR-submodularity: When the objective enjoys strong concavity along nonnegative directions, fast algorithms with improved approximation and linear convergence can be realized (Sadeghi et al., 2021).
  • Oracle complexity: For general convex constraints, stochastic value-oracle models require $O(1/\varepsilon^5)$ oracle calls for an $O(\varepsilon)$-approximation in the worst case (Pedramfar et al., 2023).
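
As an illustration of the smoothing idea behind the derivative-free and bandit methods above, here is a generic two-point zeroth-order gradient estimator sketch; it is a textbook construction under simplifying assumptions, not the specific estimator of any cited paper, and `delta` and `batch` are illustrative settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_gradient(f, x, delta=1e-2, batch=64):
    """Two-point sphere-sampling estimate of the gradient of a smoothed f.

    Uses only function values, the access model assumed by black-box and
    bandit DR-submodular methods.
    """
    n = len(x)
    g = np.zeros(n)
    for _ in range(batch):
        u = rng.standard_normal(n)
        u /= np.linalg.norm(u)  # uniform random direction on the unit sphere
        g += (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
    # Scaling by n makes this an unbiased estimate of the gradient of the
    # spherically smoothed version of f.
    return g * n / batch

# Toy DR-submodular objective (hypothetical): f(x) = sum_i log(1 + x_i).
f = lambda x: float(np.sum(np.log1p(x)))
x = np.full(4, 0.3)
print("estimate:     ", zo_gradient(f, x))
print("true gradient:", 1.0 / (1.0 + x))
```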

Key open questions remain:

  • Can the $0.401$ approximation for multilinear extension maximization under down-closed constraints be improved further, given that the inapproximability barrier is $0.478$ (Buchbinder et al., 2023)?
  • Do adaptive or history-dependent strategies exploiting the new “history-aware” bounds enable further progress (Buchbinder et al., 2023)?
  • Under which settings (e.g., additional structure, dynamic/adversarial, composite constraints) can the known performance gaps be narrowed?

7. Impact and Broader Significance

DR-submodularity has fundamentally reshaped understanding of non-convex optimization in both discrete and continuous settings. The extension of the diminishing returns paradigm to richer domains has allowed for algorithmic advances in areas previously considered intractable:

  • DR-submodularity underlies algorithms that efficiently bridge the gap between combinatorial and convex optimization, leveraging multilinear relaxations and randomized rounding.
  • The clear separation between submodularity and DR-submodularity on lattices has elucidated sources of algorithmic hardness, indicating that the diminishing returns property is necessary for tractability, especially outside the Boolean cube (Gottschalk et al., 2015).
  • Theoretical results on inapproximability, tight lower bounds, and oracle complexity expose intrinsic barriers and guide algorithm development.

Recent frameworks unify diverse settings—monotone/non-monotone, continuous/lattice, deterministic/stochastic, full-information/bandit/zero-order feedback—offering a comprehensive, modular toolbox for non-convex and non-monotone optimization with rigorous guarantees (Pedramfar et al., 2023, Pedramfar et al., 27 Apr 2024).

This synthesis reflects the depth and diversity of DR-submodular optimization, encompassing rigorous theoretical analysis, algorithmic innovation, and practical applications across machine learning, data science, and operations research.
