
Max-Min-Max Submodular Optimization

Updated 9 November 2025
  • Max-min-max submodular optimization is a framework for selecting a subset that maximizes the worst-case value across multiple monotone submodular functions under a budget constraint.
  • The approach employs innovative discrete algorithms that iteratively use greedy selection, linear programming, and Monte Carlo evaluations to achieve scalable, near-optimal results.
  • Applications span robust experimental design, fair influence maximization, and novel fair centrality maximization, ensuring balanced performance across diverse objectives.

Max-min-max submodular optimization, often referred to in the literature as multiobjective submodular maximization under a cardinality constraint, considers selecting a subset of elements from a finite ground set so as to maximize the worst-case value across multiple monotone submodular objective functions. Formally, given submodular functions $f_c: 2^V \to \mathbb{R}_{\ge 0}$ indexed by $c \in C$ and a budget $B$, the problem is to find $S \subseteq V$ with $|S| \leq B$ such that $\min_{c\in C} f_c(S)$ is maximized. This formulation is central to robust combinatorial optimization, encompassing applications in fair influence maximization, robust experimental design, and (as newly introduced) fair centrality maximization, where ensuring good performance under each objective is essential.

1. Formal Problem Definition and Representative Applications

Let $V$ be a finite ground set of $n$ elements, and let $C$ be an index set of size $k$. For each $c \in C$, $f_c: 2^{V} \to \mathbb{R}_{\ge 0}$ is a monotone submodular function: for all $S \subseteq T \subseteq V$ and $v \in V \setminus T$, $f_c(S) \leq f_c(T)$ and $f_c(S \cup \{v\}) - f_c(S) \geq f_c(T \cup \{v\}) - f_c(T)$. The objective is:

$$\max_{S \subseteq V,\, |S| \le B}\ \min_{c \in C} f_c(S)$$

This framework arises in:

  • Robust experimental design: Simultaneously maximizing a family $f_\theta(S)$ over uncertain parameters $\theta$.
  • Fair influence maximization: Each color $c$ denotes a demographic group, with $f_c(S)$ measuring expected influence spread in group $c$.
  • Fair centrality maximization: The newly introduced application, optimizing groupwise harmonic centrality in graphs after adding up to $B$ edges.
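As a concrete illustration, the max-min-max objective can be evaluated by brute force on a tiny coverage instance. Everything below (ground set, coverage sets, helper names) is made-up illustrative data; real instances are far too large to enumerate:

```python
from itertools import combinations

V = ["a", "b", "c", "d"]
# covers[v][c]: group-c targets covered by element v (made-up data)
covers = {
    "a": {0: {1, 2}, 1: set()},
    "b": {0: {2, 3}, 1: {1}},
    "c": {0: set(), 1: {1, 2, 3}},
    "d": {0: {1}, 1: {3}},
}

def f(S, c):
    """Monotone submodular coverage value of S for color c."""
    return len(set().union(*(covers[v][c] for v in S))) if S else 0

def max_min_bruteforce(B):
    """Exact max_{|S| <= B} min_c f(S, c) by enumeration (exponential in n)."""
    best_val, best_set = -1, set()
    for r in range(B + 1):
        for S in combinations(V, r):
            val = min(f(S, c) for c in (0, 1))
            if val > best_val:
                best_val, best_set = val, set(S)
    return best_val, best_set

opt_val, opt_set = max_min_bruteforce(B=2)  # a set balancing both groups
```

Note that a set maximizing one color alone (e.g., the best pair for color 0) can leave the other color nearly uncovered; the max-min objective forces a balanced choice.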

2. Continuous Relaxation, Multilinear Extension, and Practical Limitations

Theoretical approaches to this problem have explored continuous relaxations via the multilinear extension. For $x \in [0,1]^V$, define $R(x)$ as a random subset of $V$ containing each $j$ independently with probability $x_j$; the multilinear extension is then $F_c(x) = \mathbb{E}[f_c(R(x))]$. The relaxed problem is:

$$\max_{x \in [0,1]^V,\, \sum_j x_j \le B}\ \min_{c \in C} F_c(x)$$

However, evaluating $F_c(x)$ exactly involves summing over $2^n$ sets and is thus intractable. Practical approaches rely on Monte Carlo estimation or continuous-greedy methods (e.g., Frank–Wolfe), but these require repeated estimation of $F_c(x)$ and its gradients, leading to significant computational overhead, especially as $n$ and $k$ increase.
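The Monte Carlo estimation referred to above can be sketched as follows. The helper name and sample count are illustrative choices, not the paper's implementation:

```python
import random

def multilinear_estimate(f, x, samples=5000, seed=0):
    """Monte Carlo estimate of F(x) = E[f(R(x))], where R(x) contains each
    element j independently with probability x[j]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        R = {j for j in x if rng.random() < x[j]}  # draw R(x)
        total += f(R)
    return total / samples

# Sanity check with the modular function f(S) = |S|, for which
# F(x) = sum_j x_j exactly (here 0.5 + 0.25 + 1.0 = 1.75):
x = {"a": 0.5, "b": 0.25, "c": 1.0}
est = multilinear_estimate(len, x)
```

The sampling error decays only as $O(1/\sqrt{\text{samples}})$, which is exactly why repeated gradient estimation makes continuous-greedy methods expensive at scale.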

3. Discrete (Greedy-Style) Asymptotically Optimal Algorithm

A new scalable, discrete algorithm attains a $(1-1/e-\varepsilon)$ approximation with high probability, avoiding the multilinear extension and relying solely on standard submodular oracle calls. The method constructs $S$ iteratively over $B$ rounds, in each round solving a linear program (LP) over the simplex to select an element to add:

Algorithm (sketch):

  1. Run $r = \lceil \log(2/\delta) \rceil$ independent trials.
  2. For each trial:
     a. Initialize $S \leftarrow \emptyset$.
     b. For $i = 1$ to $B$:
        i. Let $S_{\mathrm{prev}} = S$.
        ii. Solve the LP over $x \in \Delta_V$ and $\xi \in \mathbb{R}$: maximize $\xi$ subject to $\sum_v x_v \left[ B\, f_c(v \mid S_{\mathrm{prev}}) + f_c(S_{\mathrm{prev}}) \right] \ge \xi$ for every $c \in C$, with $\sum_v x_v = 1$ and $x_v \ge 0$.
        iii. Sample $v \sim x$ and add $v$ to $S$.
     c. If $\min_c f_c(S) > \min_c f_c(S_{\mathrm{best}})$, update $S_{\mathrm{best}} \leftarrow S$.
  3. Return $S_{\mathrm{best}}$.
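A minimal sketch of one trial of this loop, using SciPy's `linprog` as a stand-in for the paper's LP solver (Gurobi or MWU); the coverage objectives in the usage example are made-up data:

```python
import random
import numpy as np
from scipy.optimize import linprog

def lp_greedy(fs, V, B, seed=0):
    """One trial of the LP-greedy loop; fs is a list of monotone submodular
    set functions (one per color), V the ground set, B the budget."""
    rng = random.Random(seed)
    S = set()
    for _ in range(B):
        base = [f(S) for f in fs]
        # payoff a[c][v] = B * f_c(v|S) + f_c(S)
        A = np.array([[B * (f(S | {v}) - f(S)) + b for v in V]
                      for f, b in zip(fs, base)])
        k, n = A.shape
        # variables: x_1..x_n, xi; linprog minimizes, so use -xi
        c_vec = np.zeros(n + 1)
        c_vec[-1] = -1.0
        A_ub = np.hstack([-A, np.ones((k, 1))])           # xi - a_c.x <= 0
        A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)  # sum_v x_v = 1
        res = linprog(c_vec, A_ub=A_ub, b_ub=np.zeros(k),
                      A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * n + [(None, None)])
        x = np.maximum(res.x[:n], 0.0)
        v = rng.choices(V, weights=x)[0]                  # sample v ~ x
        S.add(v)
    return S

# Usage with two made-up coverage objectives over V = {0, 1, 2}:
targets = [{0: {1, 2}, 1: {2, 3}, 2: {1}},
           {0: set(), 1: {1}, 2: {1, 2, 3}}]
fs = [lambda S, t=t: len(set().union(*(t[v] for v in S))) if S else 0
      for t in targets]
S = lp_greedy(fs, V=[0, 1, 2], B=2)
```

The LP is small (one constraint per color plus the simplex), so the per-round cost is dominated by the $nk$ oracle calls needed to build the payoff matrix.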

Key performance results are as follows:

  • In expectation over the random process, for each $c \in C$, $\mathbb{E}[f_c(S)] \ge (1-1/e)\,\mathrm{OPT}$.
  • With high probability (via a martingale concentration argument, Theorem 6), $f_c(S) \geq (1-1/e-\varepsilon)\,\mathrm{OPT}$ for all $c$, provided $\mathrm{OPT} \gg M \log k/\varepsilon^2$, where $M = \max_c \max_v f_c(v \mid \emptyset)$.

The algorithm relies only on computing $f_c(S)$ and $f_c(v \mid S)$ via submodular oracles. The LP can be efficiently approximated via a multiplicative-weights (MWU) subroutine and lazy evaluations, using $O(nk)$ oracle calls per outer iteration.
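The lazy-evaluation idea is the classic lazy-greedy trick: cached upper bounds on marginal gains remain valid as $S$ grows, by submodularity, so only the top candidate in a max-heap ever needs re-evaluation. A generic single-objective sketch (not the paper's code; the coverage data is made up):

```python
import heapq

def lazy_greedy(f, V, B):
    """Cardinality-constrained greedy for one monotone submodular f, keeping
    lazy (possibly stale) upper bounds g(v) >= f(v|S) in a max-heap."""
    S = set()
    heap = [(-(f({v}) - f(set())), v) for v in V]  # start from f(v | ∅)
    heapq.heapify(heap)
    while len(S) < B and heap:
        neg_bound, v = heapq.heappop(heap)
        gain = f(S | {v}) - f(S)          # refresh only the top candidate
        if not heap or gain >= -heap[0][0]:
            S.add(v)                      # stale bound was still the best
        else:
            heapq.heappush(heap, (-gain, v))  # reinsert with tightened bound
    return S

# Usage on a small coverage function (made-up sets):
cov = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}}
f = lambda S: len(set().union(*(cov[v] for v in S))) if S else 0
picked = lazy_greedy(f, [0, 1, 2], B=2)
```

In practice most heap pops terminate without re-evaluating more than a few candidates, which is where the oracle-call savings come from.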

4. Algorithmic Rounding and Ensuring Integral Solutions

Since the main greedy step maintains $S$ as an integral set at all times, explicit rounding is unnecessary. To remove the technical requirement $\mathrm{OPT} \gg M \log k/\varepsilon^2$, a preprocessing phase identifies and includes up to $B'$ elements of highest marginal gain (across colors), forming a set $T$. Modified objectives $\tilde f_c(A) = f_c(A \cup T)$ are constructed, now with all singleton marginals $\leq \mathrm{OPT}/B'$. A continuous relaxation is then run with budget $B - |T|$, yielding a fractional solution that is rounded via "swap rounding" to an integral set $\tilde S$. Lemma 9 and the swap-rounding analysis ensure a final $(1-1/e-O(\varepsilon))$ approximation guarantee for all $c$.

5. Computational Complexity and Scalability Features

The algorithm achieves practical scalability under the following resource bounds:

  • Submodular oracle calls: $O(nBk \log(1/\delta))$.
  • Total running time: $O(nB^3 (k/\varepsilon^2) \log k \log(1/\delta))$.

Crucial speed-ups include:

  • The MWU approach to LP solving, with $O(B^2 (M^2/\varepsilon^2 \log k))$ rounds and lazy marginal-gain bounding within each round.
  • Preprocessing to reduce the impact of large-gain elements and further control the dependence on $B$.
  • Lazy evaluations of marginal gains (maintaining upper bounds $g_c(v) \geq f_c(v \mid S)$).

Empirically, the LP can be solved via standard solvers (e.g., Gurobi) or by MWU. Introducing a "tilt" parameter $\varphi > 1$ in the LP objective biases the allocation toward colors currently attaining the minimum value, which improves practical convergence.
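The MWU subroutine can be sketched as a generic zero-sum-game solver over the colors, with $\varphi$ playing the role of the tilt base; the update rule and parameters below are illustrative, and the paper's exact scheme may differ:

```python
import numpy as np

def mwu_maxmin(A, rounds=200, phi=10.0):
    """Approximate argmax over the simplex of min_c (A x)_c for a payoff
    matrix A (colors x elements) with entries in [0, 1], via multiplicative
    weights on the colors; phi > 1 acts as the tilt base."""
    k, n = A.shape
    w = np.ones(k)
    counts = np.zeros(n)
    for _ in range(rounds):
        y = w / w.sum()              # current weighting of the colors
        v = int(np.argmax(y @ A))    # best-response element vs. weights
        counts[v] += 1
        w *= phi ** (-A[:, v])       # downweight colors already doing well
    return counts / rounds           # empirical element distribution

# Two colors, three elements: element 2 is the only balanced choice, since
# mixing elements 0 and 1 equally yields min value 0.5 < 0.6.
A = np.array([[1.0, 0.0, 0.6],
              [0.0, 1.0, 0.6]])
x = mwu_maxmin(A)
```

Larger $\varphi$ concentrates weight faster on the worst-off color, which matches the empirical observation that moderate tilting accelerates convergence.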

6. Applications: Fair Centrality Maximization

A significant new application is groupwise harmonic centrality in networks. For a node $v$ in a directed graph $G=(V,A)$, classical harmonic centrality is $h_G(v) = \sum_{u \neq v} 1/d_G(u,v)$; adding an edge into $v$ increases this quantity submodularly. The fair variant seeks to maximize

$$h^{\mathrm{min}}_G(v) = \min_{c\in C} \left[ \frac{1}{|V_c\setminus\{v\}|} \sum_{u\in V_c\setminus\{v\}} \frac{1}{d_G(u,v)} \right]$$

Selecting up to $B$ edges $F$ to add to $G$ defines $f_c(F)$ as the post-edit, groupwise normalized harmonic sum. Each $f_c$ is nonnegative, monotone, and submodular, and the resulting task is $\max_{|F|\leq B} \min_c f_c(F)$. Standard continuous methods fail to scale to graphs with tens of thousands of nodes, whereas the new discrete method retains its theoretical and practical guarantees in this regime.
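The group-normalized harmonic objective above can be computed directly with a reverse BFS from the target node, since $d_G(u, v)$ for all $u$ is a single shortest-path sweep on the reversed graph. The tiny graph and group assignment below are illustrative:

```python
from collections import deque

def harmonic_by_group(adj, v, groups):
    """Min over groups of the normalized harmonic sum of 1/d(u, v), with
    d(u, v) computed by BFS on the reversed directed graph from v."""
    radj = {u: [] for u in adj}           # reverse all edges u -> w
    for u, nbrs in adj.items():
        for w in nbrs:
            radj[w].append(u)
    dist = {v: 0}
    q = deque([v])
    while q:                              # BFS gives d(u, v) for reachable u
        u = q.popleft()
        for w in radj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    vals = []
    for members in groups.values():
        others = [u for u in members if u != v]
        s = sum(1.0 / dist[u] for u in others if u in dist)
        vals.append(s / len(others) if others else 0.0)
    return min(vals)

# Tiny directed graph (made up): 1 -> 2 -> 3 <- 4, with two groups.
adj = {1: [2], 2: [3], 3: [], 4: [3]}
score = harmonic_by_group(adj, v=3, groups={"c0": [1, 2], "c1": [4]})
```

Evaluating $f_c(F)$ for a candidate edge set $F$ amounts to rerunning this BFS on the edited graph, which is what makes per-oracle-call cost the dominant factor at scale.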

7. Empirical Performance and Comparative Analysis

Experiments were performed on:

  • Max-$k$-cover instances ($k=20$, $n=64$) from stochastic Kronecker, Barabási–Albert, and Erdős–Rényi models.
  • 20 Amazon co-purchase networks (up to $n \approx 10^4$ nodes) for fair centrality with $k=2$.
  • Simulated Antelope Valley social networks ($n=500$, $k$ up to 13) for fair influence maximization.

Compared algorithms included the LP Greedy method (with both Gurobi and MWU LP solving, plus lazy updates), round-robin greedy (Udwani-style), Saturate (a bi-criteria method), Udwani's MWU (a $(1-1/e)^2$-approximation), and the Frank–Wolfe continuous method (for influence). Major findings:

  • LP Greedy achieves the highest min-cover on max-$k$-cover, with $\approx 10\times$ fewer oracle calls than MWU.
  • On fair centrality, LP Greedy outperforms Saturate and MWU in both objective value and running time, solving graphs of up to 10,000 nodes in minutes.
  • For fair influence, LP Greedy matches or outperforms Frank–Wolfe for nontrivial budgets and is more broadly applicable.
  • Ablation studies show that 20 outer repetitions suffice and that a tilt factor $\varphi \approx 10$ optimizes practical performance.

Overall, this algorithm bridges the prior gap between theory and practice, attaining the asymptotically optimal $(1-1/e-\varepsilon)$ approximation via an efficient, scalable discrete method deployable on large-scale real-world tasks in fair optimization of submodular objectives.
