Non-monotone DR-Submodular Maximization
- Non-monotone DR-submodular maximization is defined for continuous functions exhibiting diminishing returns and non-monotonic behavior over convex sets.
- A non-monotone Frank-Wolfe algorithm achieves a $(1-m)/4$-approximation, where $m$ is the minimum $\ell_\infty$-norm of a feasible point, proven optimal via tight hardness arguments.
- Empirical evaluations in revenue maximization, location summarization, and quadratic programming highlight the method’s efficiency and practical superiority.
Non-monotone DR-submodular maximization concerns the optimization of functions that generalize discrete submodularity (diminishing returns) to the continuous domain, encompassing non-monotonic behavior and non-down-closed convex constraints. This class unifies and extends classical set-function submodular maximization and covers a diversity of problems in machine learning, economics, and network optimization. The area is notable for a sequence of impossibility results, breakthroughs on tight polynomial-time approximability, and the interplay between constraint geometry and achievable guarantees (Mualem et al., 2022).
1. DR-Submodularity and Problem Formulation
Let $f : [0,1]^n \to \mathbb{R}_{\ge 0}$ be a continuously differentiable function, with feasible set $\mathcal{K} \subseteq [0,1]^n$ convex (not necessarily down-closed). $f$ is called DR-submodular if, for all $a \le b$ (coordinate-wise), every coordinate $i$, and all $k \ge 0$ with $a + k e_i, b + k e_i \in [0,1]^n$,
$$f(a + k e_i) - f(a) \;\ge\; f(b + k e_i) - f(b).$$
Equivalently, the gradient is coordinate-wise antitone: $\nabla f(x) \ge \nabla f(y)$ whenever $x \le y$, and, when $f$ is twice differentiable, all Hessian entries satisfy $\partial^2 f / \partial x_i \partial x_j \le 0$.
A function is non-monotone DR-submodular if the above holds but monotonicity ($\nabla f \ge 0$ everywhere) is not assumed. Maximization of such functions over convex sets is NP-hard even in simple cases (Mualem et al., 2022).
Illustrative Example. The separable quadratic $f(x) = \sum_{i} x_i (1 - x_i)$ on $[0,1]^n$ is DR-submodular (its Hessian is $-2I$, entrywise non-positive) but non-monotone: $f$ is initially increasing in each $x_i$, then decreasing for $x_i > 1/2$.
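A numerical sanity check of the DR inequality for the separable quadratic $f(x) = \sum_i x_i(1-x_i)$ can make the definition concrete (a minimal sketch; the random sampling scheme is an illustrative assumption):

```python
import numpy as np

def f(x):
    # Separable quadratic: DR-submodular (Hessian = -2I, entrywise <= 0)
    # and non-monotone (decreasing in each coordinate past 1/2).
    return np.sum(x * (1.0 - x))

rng = np.random.default_rng(0)
n = 4
for _ in range(1000):
    a = rng.uniform(0, 0.5, n)          # a <= b coordinate-wise
    b = a + rng.uniform(0, 0.4, n)
    i = rng.integers(n)
    k = rng.uniform(0, 1.0 - b[i])      # keep b + k*e_i inside [0,1]^n
    e = np.zeros(n)
    e[i] = k
    # DR inequality: marginal gain at the smaller point dominates
    assert f(a + e) - f(a) >= f(b + e) - f(b) - 1e-12

# Non-monotonicity: pushing a coordinate past 1/2 decreases f
assert f(np.full(n, 0.9)) < f(np.full(n, 0.5))
print("DR inequality and non-monotonicity verified")
```

The check exercises exactly the coordinate-wise diminishing-returns condition above, with the tolerance `1e-12` absorbing floating-point noise.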
2. Approximability Barriers and the Minimum-Norm Parameter
A central negative result (Vondrák 2013) establishes that for non-monotone DR-submodular maximization over a general convex set $\mathcal{K}$, no algorithm running in sub-exponential time can achieve a constant-factor approximation in the worst case. The source of this hardness is the so-called symmetry gap, constructed via adversarially symmetric feasible regions and objectives.
A key technique to bypass this barrier is to parameterize the approximation ratio in terms of the minimum $\ell_\infty$-norm
$$m \;=\; \min_{x \in \mathcal{K}} \|x\|_\infty.$$
When $m < 1$, the feasible set stays "interior," breaking full symmetry and permitting nontrivial bounds. Sub-exponential-time methods achieve approximation ratios that scale as $1 - m$, gracefully degrading as $m \to 1$, i.e., as $\mathcal{K}$ is pushed onto the cube's boundary (Mualem et al., 2022).
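For polytopes, the minimum $\ell_\infty$-norm $m = \min_{x \in \mathcal{K}} \|x\|_\infty$ is a small LP (minimize $t$ subject to $x \in \mathcal{K}$, $x_i \le t$). A minimal sketch computes it by bisection over $t$ given a feasibility oracle; the example set $\{x \in [0,1]^2 : x_1 + x_2 \ge 1\}$ is an illustrative assumption:

```python
def min_linf_norm(feasible_at, lo=0.0, hi=1.0, iters=50):
    """Bisect on t: m is the smallest t such that K intersects [0, t]^n.

    feasible_at(t) must report whether K contains a point with all
    coordinates at most t (a feasibility oracle for the capped set).
    """
    for _ in range(iters):
        mid = (lo + hi) / 2
        if feasible_at(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Example: K = {x in [0,1]^2 : x1 + x2 >= 1}.  Capping every coordinate
# at t leaves K nonempty iff 2t >= 1, so m = 1/2 (K avoids the origin).
m = min_linf_norm(lambda t: 2 * t >= 1)
print(round(m, 6))  # → 0.5
```

Here $m = 1/2 < 1$, so the $(1-m)$-scaled guarantees below are nontrivial for this set.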
3. Polynomial-time Algorithms: The $(1-m)/4$ Guarantee
Du (2022) discovered the first polynomial-time, information-theoretically optimal algorithm for non-monotone DR-submodular maximization over general convex constraints, achieving a guarantee of
$$f(\hat{x}) \;\ge\; \left(\frac{1-m}{4} - O(\varepsilon)\right) f(o),$$
where $\hat{x}$ is the output after $T = O(1/\varepsilon)$ iterations with small step size $\varepsilon$ and $o$ is an optimal solution (Mualem et al., 2022).
Algorithm—Non-monotone Frank-Wolfe:
- Start from $x^{(0)} \in \arg\min_{x \in \mathcal{K}} \|x\|_\infty$.
- For $i = 1, \dots, T$:
  - $v^{(i)} \in \arg\max_{v \in \mathcal{K}} \langle v, \nabla f(x^{(i-1)}) \rangle$.
  - $x^{(i)} = (1 - \varepsilon)\, x^{(i-1)} + \varepsilon\, v^{(i)}$.
Output the best $x^{(i)}$.
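The algorithm above can be sketched in a few lines of NumPy; the linear-maximization oracle and the toy quadratic instance over the box $[0,1]^n$ are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

def non_monotone_frank_wolfe(grad_f, f, lmo, x0, T=100):
    """Frank-Wolfe sketch for non-monotone DR-submodular maximization.

    lmo(g) returns argmax_{v in K} <v, g> (linear maximization oracle);
    x0 should be a minimum l_inf-norm point of K.  Step size eps = 1/T.
    """
    eps = 1.0 / T
    x, best = x0.copy(), x0.copy()
    for _ in range(T):
        v = lmo(grad_f(x))              # Frank-Wolfe direction
        x = (1 - eps) * x + eps * v     # convex step keeps x in K
        if f(x) > f(best):
            best = x.copy()             # output the best iterate seen
    return best

# Illustrative instance (assumption): non-monotone quadratic over [0,1]^n
n = 5
H = -np.eye(n)                          # entrywise non-positive Hessian
h = np.full(n, 0.6)
f = lambda x: 0.5 * x @ H @ x + h @ x   # peak at x_i = 0.6, so non-monotone
grad = lambda x: H @ x + h
lmo = lambda g: (g > 0).astype(float)   # box oracle: v_i = 1 where g_i > 0
x = non_monotone_frank_wolfe(grad, f, lmo, np.zeros(n), T=500)
```

On this toy instance ($m = 0$, since the origin is feasible) the best iterate lands near the unconstrained maximizer, well above the worst-case $\tfrac{1}{4} f(o)$ floor.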
Analysis: By DR-submodularity, the Frank-Wolfe direction guarantees that the directional derivative at each step is lower-bounded in terms of the optimal value $f(o)$. The convex steps keep the iterates bounded away from the upper boundary of the cube, and the approximation factor degrades as $m \to 1$ (when $\mathcal{K}$ lies almost entirely on the boundary, e.g., is a single vertex).
This is provably information-theoretically sharp; no sub-exponential-time (let alone polynomial-time) algorithm can beat $(1-m)/4$ in the worst case (Mualem et al., 2022).
4. Online Maximization and Regret: Matching Tight Ratios
For the online version (sequentially revealed DR-submodular objectives $f_1, \dots, f_T$), a matching $(1-m)/4$-approximation is obtained with $O(\sqrt{T})$ regret.
Algorithm—Non-monotone Meta-Frank-Wolfe:
- At each round $t$, initialize $x_t^{(1)}$ at a minimum $\ell_\infty$-norm point of $\mathcal{K}$.
- Execute $L$ Frank-Wolfe steps with $L$ independent online linear-optimization subroutines $\mathcal{E}_1, \dots, \mathcal{E}_L$.
- For each $k = 1, \dots, L$:
  - Receive $v_t^{(k)}$ from $\mathcal{E}_k$.
  - $x_t^{(k+1)} = (1 - \varepsilon)\, x_t^{(k)} + \varepsilon\, v_t^{(k)}$.
  - Receive/estimate an unbiased estimate of $\nabla f_t(x_t^{(k)})$ and feed it as the loss vector to $\mathcal{E}_k$.
- Play $x_t = x_t^{(L+1)}$.
The expected reward over $T$ rounds satisfies
$$\mathbb{E}\left[\sum_{t=1}^{T} f_t(x_t)\right] \;\ge\; \frac{1-m}{4}\, \max_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x) \;-\; O(\sqrt{T}).$$
This guarantee, both offline and online, is proven optimal (Mualem et al., 2022).
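A minimal simulation of the meta-algorithm can illustrate the loop structure. Here each linear-optimization subroutine is simplified to follow-the-leader over the box $[0,1]^n$, and the stream consists of identical quadratic objectives; both are simplifying assumptions for illustration, whereas the paper's guarantee uses no-regret subroutines against adversarial $f_t$:

```python
import numpy as np

def meta_frank_wolfe(objectives, grads, lmo, x0, L=20):
    """Online non-monotone Meta-Frank-Wolfe (sketch).

    One online linear-optimization subroutine per Frank-Wolfe step k;
    each is simplified here to follow-the-leader: play lmo applied to
    the sum of the gradient feedback it has received so far.
    """
    eps = 1.0 / L
    acc = [np.zeros_like(x0) for _ in range(L)]  # per-subroutine feedback sums
    rewards = []
    for f_t, g_t in zip(objectives, grads):
        x, vs = x0.copy(), []
        for k in range(L):
            v = lmo(acc[k])                      # subroutine k's play
            vs.append(v)
            x = (1 - eps) * x + eps * v          # Frank-Wolfe step
        rewards.append(f_t(x))                   # play x_t, collect reward
        y = x0.copy()                            # replay to recover x_t^(k)
        for k in range(L):
            acc[k] += g_t(y)                     # feed gradient at x_t^(k) back
            y = (1 - eps) * y + eps * vs[k]
    return rewards

# Stationary toy stream (assumption): identical non-monotone quadratics
n = 5
f = lambda x: 0.6 * x.sum() - 0.5 * x @ x       # peak at x_i = 0.6
g = lambda x: 0.6 - x
lmo = lambda s: (s > 0).astype(float)           # box [0,1]^n oracle
rewards = meta_frank_wolfe([f] * 30, [g] * 30, lmo, np.zeros(n))
```

After an exploratory first round, the per-round reward quickly climbs near the optimum of the (stationary) objective, consistent with sublinear regret on an easy stream.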
5. Information-theoretic Hardness
A symmetry-gap argument demonstrates that for any $\varepsilon > 0$ and $m \in [0,1]$, there is no sub-exponential-time algorithm that achieves a
$$\left(\frac{1-m}{4} + \varepsilon\right)$$
approximation for maximizing a non-negative, smooth DR-submodular $f$ over a polytope $\mathcal{K}$ with $\min_{x \in \mathcal{K}} \|x\|_\infty = m$. The construction involves adversarial, high-dimensional instances where distinguishing optimal from near-optimal regions is exponentially hard due to function symmetry.
This implies the $(1-m)/4$ factor achieved by Du (2022) and by the presented online method is not improvable short of exponential time, for general $\mathcal{K}$.
6. Extensions: Comparison to Other Settings and Interpolated Guarantees
The $(1-m)/4$ bound specializes as follows:
- For $m = 0$ (i.e., $0 \in \mathcal{K}$, e.g., a down-closed set), the bound becomes $1/4$, tight over general (not necessarily down-closed) convex sets containing the origin.
- For $m \to 1$ (e.g., $\mathcal{K}$ shrinks to a singleton on the upper boundary or a low-dimensional facet of the cube), the guarantee vanishes, as expected. This characterizes a smooth transition between the easy (down-closed) and hard (general) cases.
Intermediate approximation ratios $1/e$, $0.385$, and $1/3$ arise in the down-closed, box, or other special settings, addressed in the literature by continuous greedy, measured continuous greedy, double-greedy, and hybrid approaches (Chen et al., 2023, Bian et al., 2017, Niazadeh et al., 2018).
7. Empirical Performance across Applications
The Du (2022) and matching online algorithms were tested in several domains:
- Revenue Maximization (Social Networks): On datasets such as Facebook (64K nodes) and Advogato (6.5K nodes) with box+budget constraints, the method converges substantially faster and reaches higher rewards than competing algorithms (e.g., [Thắng & Srivastav 2021]).
- Location Summarization: On the Yelp Charlotte dataset, the method attains higher objective values throughout the run than competing approaches.
- Quadratic Programming with entrywise non-positive Hessian matrices (the condition for a quadratic to be DR-submodular): varied constraint sets (down-closed and non-down-closed) were used, and the polynomial-time non-monotone Frank-Wolfe outperforms previous sub-exponential algorithms even in down-closed cases when all methods run under the same time budget.
These results support both the theoretical tightness and the practical strength of the algorithms in offline and online settings (Mualem et al., 2022).
Summary Table: Offline Approximability by Constraint Type
| Constraint Type | Best Achievable Ratio | Achieved by | Complexity |
|---|---|---|---|
| Down-closed (e.g. box) | $1/e$ | [Bian et al.], [Dürr et al.] | poly-time |
| General, $m < 1$ | $(1-m)/4$ | Du (2022) offline; (Mualem et al., 2022) online | poly-time |
| General, $m = 1$ | none (no constant-factor approx.) | — | — (hardness) |
8. Concluding Remarks
Non-monotone DR-submodular maximization over general convex sets is now fully characterized with respect to worst-case polynomial-time and sub-exponential-time approximability, with the $(1-m)/4$ bound being sharp. Algorithmic frameworks (Frank-Wolfe variants, online meta-Frank-Wolfe) are efficient, general, and empirically dominant, making the area a canonical example of tight complexity-theoretic and practical trade-offs in non-convex continuous optimization. Advances in constraint-specific interpolation (e.g., via convex-body decomposition) and specialized oracles further expand the landscape, but the inapproximability barrier sets a final limit absent additional structure (Mualem et al., 2022, Mualem et al., 17 Jan 2024).