Subset Weight Optimization Problem

Updated 9 January 2026
  • Subset Weight Optimization Problem is a combinatorial challenge that involves choosing a subset with weight and structure constraints to optimize a specific objective.
  • Multiple formulations, such as subset sum, knapsack, and best subset selection in regression, showcase the problem's NP-hard nature and diverse applications.
  • Advanced methods, including dynamic programming, meet-in-the-middle, and heuristic algorithms, provide practical solutions under various constraints and large-scale settings.

The Subset Weight Optimization Problem (SWOP) encompasses a broad class of combinatorial optimization problems where the task is to select a subset (often with cardinality, weight, or structure constraints) from a finite ground set to optimize a given objective. This problem arises in diverse settings, including classic subset sum, knapsack, best subset selection in regression, matroid optimization, and constrained variants in graphs or digraphs. SWOP is generally NP-hard, and extensive theory and applied methodology have evolved to address both exact and approximate solutions.

1. Problem Formulations and Variants

Fundamental forms of the Subset Weight Optimization Problem include:

  • Subset Sum / Knapsack: Given $S = \{a_1, \dots, a_n\} \subset \mathbb{N}$, maximize $\sum_{a \in S'} a$ over subsets $S' \subseteq S$ with $\sum_{a \in S'} a \le t$ (Koiliaris et al., 2018, Lilienthal, 2015).
  • Best Subset Selection (Regression): For $X \in \mathbb{R}^{n \times p}$, $y \in \mathbb{R}^n$, find a $k$-sparse $\beta \in \mathbb{R}^p$ minimizing $(1/2)\|y - X\beta\|_2^2$ subject to $\|\beta\|_0 \leq k$ (Singh et al., 31 Mar 2025, Moka et al., 2022, Ren et al., 2024).
  • Fixed-Weight Subset Sum: Find a subset of $\ell$ elements from $n$ weights summing to a target $t$ (Shallue, 2012).
  • Maximum Weighted Independent Set: Maximize total weight over independent sets in a graph, $\max_{S \subseteq V,\ S\ \text{independent}} \sum_{v \in S} \omega(v)$ (Borowitz et al., 15 Oct 2025).
  • Optimization under additional constraints: Including those induced by digraphs (Gourvès et al., 2016), bipartite associations (Zinder et al., 2022), matroidal structure (Bérczi et al., 1 Jul 2025), or monotone set systems (Kobayashi et al., 2020).

In many cases the decision version ("is there a feasible set achieving at least/at most $t$?") is a special case of the optimization version.
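For small instances, the relationship between the two versions can be made concrete by brute-force enumeration (a hedged sketch with hypothetical helper names; practical solvers use the algorithms of Section 3):

```python
from itertools import combinations

def subset_sum_opt(weights, t):
    """Brute-force subset sum: maximize the total weight of a subset
    whose sum does not exceed the capacity t (exponential time)."""
    best_sum, best_subset = 0, ()
    for r in range(len(weights) + 1):
        for subset in combinations(weights, r):
            s = sum(subset)
            if best_sum < s <= t:
                best_sum, best_subset = s, subset
    return best_sum, best_subset

# The exact-target decision version ("does some subset sum to exactly t?")
# is answered by the optimization version: check whether the optimum
# under capacity t equals t.
best, witness = subset_sum_opt([3, 34, 4, 12, 5, 2], t=9)
print(best == 9)  # True: a subset summing to exactly 9 exists, e.g. {3, 4, 2}
```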

2. Complexity and Theoretical Limits

SWOP is NP-hard across a wide range of formulations. Specific results:

  • Best Subset Selection is NP-hard; the feasible region has combinatorial size $\binom{p}{k}$ (Singh et al., 31 Mar 2025).
  • Knapsack/Subset Sum: Classical dynamic programming is pseudo-polynomial in the input weights; no truly polynomial-time algorithm is possible unless P = NP (Koiliaris et al., 2018, Lilienthal, 2015).
  • MWIS (Maximum Weight Independent Set) and partitioning with associated subsets are strongly NP-hard, even for restricted instances (Borowitz et al., 15 Oct 2025, Zinder et al., 2022).
  • Constrained variants (e.g., with digraph constraints (Gourvès et al., 2016)) are NP-hard, with APX-hardness and tight inapproximability for some digraph classes.

Despite hardness, specific formulations admit pseudo-polynomial–time, FPTAS, or PTAS for various parameter regimes (weight bounds, treewidth, structure).

3. Algorithms and Methodologies

3.1 Exact and Pseudo-Polynomial Methods

  • Dynamic Programming: For SWOP on integer weights, the classical $O(nt)$ and modern $\tilde O(\sqrt{n}\, t)$ algorithms compute all achievable sums or reconstruct witness subsets, with space $O(\sqrt{n}\, t)$ (Koiliaris et al., 2018).
  • Meet-in-the-Middle: Reduces time to $O(2^{n/2})$, at the cost of $O(2^{n/2})$ space, for knapsack-like problems (Lilienthal, 2015).
  • Bipartite Synthesis Method (BSM): Achieves $O(2^{0.5n})$ deterministic time for subset-sum/knapsack via interval partitioning, coefficient splits, and multi-scale pruning (Lilienthal, 2015).
  • Fixed-Weight, Randomized Birthday Algorithms: Use $k$-set birthday techniques and splitting systems to achieve improved time-space tradeoffs for constrained subset sum, especially important in cryptography (Shallue, 2012).
  • Specialized DP for Restricted Structures: In oriented trees or bounded-rank matroids, pseudo-polynomial/strongly polynomial algorithms are possible (Gourvès et al., 2016, Bérczi et al., 1 Jul 2025).
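The classical pseudo-polynomial dynamic program from the first bullet can be sketched in a few lines (the textbook $O(nt)$ table with witness reconstruction, not the $\tilde O(\sqrt{n}\, t)$ algorithm of Koiliaris et al.):

```python
def subset_sum_dp(weights, t):
    """Classical O(n*t) dynamic program over achievable sums.
    reachable[s] stores a witness subset summing to s, or None."""
    reachable = [None] * (t + 1)
    reachable[0] = []
    for w in weights:
        if w > t:
            continue
        # scan sums downward so each weight is used at most once
        for s in range(t, w - 1, -1):
            if reachable[s] is None and reachable[s - w] is not None:
                reachable[s] = reachable[s - w] + [w]
    best = max(s for s in range(t + 1) if reachable[s] is not None)
    return best, reachable[best]

best, witness = subset_sum_dp([3, 34, 4, 12, 5, 2], 9)
# best == 9; witness is some subset summing to 9, e.g. [4, 5]
```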

3.2 Suboptimal and Heuristic Algorithms

Best Subset Selection in High Dimensions

A variety of competitive suboptimal procedures are used, including (Singh et al., 31 Mar 2025):

  • Forward Selection (FS): Greedy; includes the most beneficial variable at each step; $O(kpn)$–$O(kpn^2)$ time.
  • Sequential Forward Floating Selection (SFFS): Allows post-inclusion exclusion to overcome the nesting effect; $O(kp^2 n)$.
  • Discrete First-Order Methods (DFO/DFOn): Iteratively project onto the space of $k$-sparse vectors under quadratic majorization; extremely fast ($O(np + p\log p)$ per iteration).
  • Genetic Algorithms (GA): Population-based, employing crossover/mutation and fitness selection.
  • Sequential Feature Swapping (SFS1 and SFS2): Introduced in (Singh et al., 31 Mar 2025); iteratively swap one or two features to greedily reduce the residual sum of squares (RSS), terminate finitely with guaranteed descent, and offer a competitive balance of solution quality and runtime.
  • Primal-Dual Optimization: For $\ell_0$-regularized GLMs, primal-dual certificates, safe screening, and incremental active-set methods enable polynomial-time convergence and tight duality gaps (Ren et al., 2024).
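As a hedged illustration of the simplest of these procedures, forward selection can be written as repeated least-squares refits (a naive sketch that re-solves from scratch at each step; efficient implementations update the fit incrementally to reach the stated complexity):

```python
import numpy as np

def forward_selection(X, y, k):
    """Greedy forward selection: at each step, add the feature that
    most reduces the residual sum of squares of a least-squares fit."""
    _, p = X.shape
    selected = []
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in range(p):
            if j in selected:
                continue
            cols = selected + [j]
            beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = float(np.sum((y - X[:, cols] @ beta) ** 2))
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
    return selected

# Noiseless 2-sparse example: the true support {2, 5} should be recovered.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))
y = 3.0 * X[:, 2] - 2.0 * X[:, 5]
print(sorted(forward_selection(X, y, 2)))
```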

Continuous relaxations (e.g., COMBSS) use differentiable surrogates over the simplex, solved via gradient descent, with subset discretizations recovered by thresholding or via full solution paths (Moka et al., 2022).
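A minimal sketch of the discrete first-order idea (essentially iterative hard thresholding: a gradient step on the least-squares loss followed by projection onto $k$-sparse vectors; the step size and iteration count here are illustrative assumptions):

```python
import numpy as np

def dfo_sketch(X, y, k, iters=500):
    """Discrete first-order method: gradient step on 0.5*||y - X b||^2,
    then hard-threshold to the k largest-magnitude coordinates."""
    _, p = X.shape
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(iters):
        beta = beta - X.T @ (X @ beta - y) / L
        small = np.argsort(np.abs(beta))[:p - k]  # indices to zero out
        beta[small] = 0.0
    return beta

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 10))
y = X @ np.array([0, 0, 4.0, 0, 0, 0, -3.0, 0, 0, 0])
beta = dfo_sketch(X, y, 2)
# on this well-conditioned noiseless instance the support {2, 6} is recovered
```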

Large-Scale Graph Subset Optimization

  • Distributed Data-Reduction (MWIS): For graphs with $10^9$+ vertices, distributed local reductions (heavy vertex, neighborhood removal, folding, etc.) and distributed greedy/peeling heuristics retain near-optimality with massive speedups (Borowitz et al., 15 Oct 2025).
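On a single machine, the greedy/peeling idea underlying such heuristics can be sketched as follows (a simplified, non-distributed illustration; the weight-to-degree ratio rule is one common choice, not necessarily the exact rule of Borowitz et al.):

```python
def greedy_mwis(adj, weight):
    """Greedy MWIS heuristic: repeatedly pick the remaining vertex with
    the best weight/(degree+1) ratio, then delete it and its neighbors."""
    remaining = set(adj)
    independent_set = []
    while remaining:
        v = max(remaining,
                key=lambda u: weight[u]
                / (1 + sum(1 for w in adj[u] if w in remaining)))
        independent_set.append(v)
        remaining -= {v} | set(adj[v])
    return independent_set

# Path a - b - c: taking b alone (weight 5) beats taking {a, c} (weight 2).
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
weight = {"a": 1, "b": 5, "c": 1}
print(greedy_mwis(adj, weight))  # ['b']
```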

4. Constrained and Generalized Forms

4.1 Combinatorial and Structural Variants

Certain SWOPs are formulated with structural or combinatorial constraints:

  • Minimum-Weight Partitioning with Associated Subsets (MWPSAS): Partitioning a primary set $N$ with associated subsets $M$ per element. The objective is to minimize the maximal sum of weights of block elements plus their associated variables. Integer programming formulations and greedy-phase approximations with additive performance guarantees are established (Zinder et al., 2022).
  • Subset Sum/Knapsack with Digraph Constraints: Imposing closure constraints on subsets following digraph arcs (strong/weak), and optionally requiring maximality. NP-hardness, PTAS for DAGs, and pseudo-polynomial DPs for oriented trees are proved (Gourvès et al., 2016).
  • Subset-Constrained Inverse Matroid Optimization: Modify weights minimally (in $\ell_\infty$ norm or integrally) so that a specified subset $S_0$ controls the set of optimal bases under structural constraints; solvable in strongly polynomial time via refined min-max theorems (Bérczi et al., 1 Jul 2025).
  • Monotone Property-Weighted Enumeration: For monotone set systems $\Pi$, efficient approximate enumeration (rather than a single minimum) of all minimal subsets of weight at most $k$. Supergraph-based enumeration achieves constant-factor approximations, polynomial delay, and output sensitivity (Kobayashi et al., 2020).

4.2 Ratio and Multi-Objective Optimization

  • Subset Sum Ratio (SSR): Partition $I$ into disjoint $X, Y$ minimizing $\max\{\Sigma(X)/\Sigma(Y),\ \Sigma(Y)/\Sigma(X)\}$. Recent work establishes an FPTAS with complexity $O(n/\varepsilon^{0.9386})$, strictly faster in $\varepsilon$ than classic subset sum, based on instance reduction and geometric search among truncated subsets (Bringmann, 2023).
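For intuition, the SSR objective can be evaluated exactly by exhaustive three-way assignment on small instances (a toy baseline; the FPTAS above is what makes large instances tractable):

```python
from itertools import product

def ssr_brute_force(items):
    """Exhaustive Subset Sum Ratio: assign each item to X, Y, or neither,
    and minimize max(sum(X)/sum(Y), sum(Y)/sum(X)) over nonempty X, Y."""
    best = float("inf")
    for assignment in product((None, "X", "Y"), repeat=len(items)):
        sx = sum(a for a, g in zip(items, assignment) if g == "X")
        sy = sum(a for a, g in zip(items, assignment) if g == "Y")
        if sx > 0 and sy > 0:
            best = min(best, max(sx / sy, sy / sx))
    return best

print(ssr_brute_force([1, 2, 3]))  # 1.0: X = {3} and Y = {1, 2} balance exactly
```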

5. Empirical and Theoretical Performance

Empirical studies provide guidance on regime-dependent algorithm performance:

  • Best Subset Selection: On synthetic high-dimensional regression, SFFS and FS perform best in overdetermined settings, SFS2 is best in underdetermined or heavily correlated regimes, and DFO is fastest but can fall into poor local minima with ill-conditioned $X$ or low SNR (Singh et al., 31 Mar 2025). Genetic algorithms are seldom competitive under tight CPU limits.
  • Distributed MWIS: Asynchronous reduction and "reduce-and-peel"/greedy heuristics scale with minor quality loss ($<2\%$) up to billion-vertex instances, with $>30\times$ speedups over sequential baselines (Borowitz et al., 15 Oct 2025).
  • COMBSS: Gradient-based continuous surrogates for BSS achieve recovery rates of 90–100% for small $k$, outperforming classical heuristics and matching or exceeding exact MIO for $p \gg n$ within seconds (Moka et al., 2022).
  • Approximate Enumeration: Algorithms for minimal monotone subsets provide guarantees on completeness (enumerating all small solutions) and approximation factor, with explicit bounds in terms of schema and set family (Kobayashi et al., 2020).

6. Practical Recommendations

  • For moderate to large $p$, SFS1 (sequential feature swapping with $t = 1$) provides an effective tradeoff of solution quality and runtime; SFS2 is worthwhile at increased cost if optimality is critical (Singh et al., 31 Mar 2025).
  • For SWOPs on massive graphs, distributed reduction followed by parallelized greedy/peeling is empirically robust and computationally scalable (Borowitz et al., 15 Oct 2025).
  • In highly structured or constrained variants (e.g., matroids, digraph-constrained sets, MWPSAS), exploiting the available structure (safe screening in regression, PTAS for bounded treewidth, simplified min-max characterizations in matroids) is essential for tractability and for an accurate understanding of optimality bounds (Zinder et al., 2022, Bérczi et al., 1 Jul 2025, Gourvès et al., 2016).
  • For enumeration applications in monotone settings (e.g., vertex covers), output-sensitive enumeration guarantees with constant-factor relaxation provide a path to solution diversity and robustness (Kobayashi et al., 2020).

7. Extensions and Outlook

Open problems span improved approximation–enumeration tradeoffs in monotone systems, generalizations of inverse optimization in matroid/intersection settings, and domain-agnostic surrogate relaxations bridging combinatorial and continuous methods. The synthesis of structural decompositions (splitting systems, interval refinement, block coordinate frameworks) and modern machine learning heuristics remains a key direction for further advancement of the SWOP paradigm.
