
A Particle Filter based Multi-Objective Optimization Algorithm: PFOPS

Published 28 Aug 2018 in stat.ML, cs.AI, and cs.LG | arXiv:1808.09446v4

Abstract: This paper is concerned with a recently developed paradigm for population-based optimization, termed particle filter optimization (PFO). This paradigm is attractive in terms of coherence in theory and easiness in mathematical analysis and interpretation. Current PFO algorithms only work for single-objective optimization cases, while many real-life problems involve multiple objectives to be optimized simultaneously. To this end, we make an effort to extend the scope of application of the PFO paradigm to multi-objective optimization (MOO) cases. An idea called path sampling is adopted within the PFO scheme to balance the different objectives to be optimized. The resulting algorithm is thus termed PFO with Path Sampling (PFOPS). The validity of the presented algorithm is assessed based on three benchmark MOO experiments, in which the shapes of the Pareto fronts are convex, concave and discontinuous, respectively.

Summary

  • The paper presents PFOPS, a Bayesian particle filter-based algorithm that extends sequential Monte Carlo methods to multi-objective optimization with provable convergence guarantees.
  • It employs path sampling and importance resampling with componentwise Metropolis moves to explore trade-offs and construct proxy target probability density functions.
  • Empirical evaluations on convex, concave, and discontinuous benchmarks demonstrate that PFOPS accurately captures Pareto fronts and outperforms NSGA-II under limited sampling.

Particle Filter-Based Multi-Objective Optimization: PFOPS

Introduction and Problem Formulation

This work introduces PFOPS, a particle filter (PF)-based algorithm for multi-objective optimization (MOO). Unlike traditional meta-heuristic evolutionary computation (EC) algorithms that dominate MOO, PFOPS is grounded in Bayesian statistical inference and sequential Monte Carlo (SMC) methods. The motivation is the theoretical coherence of PFO schemes, which offer provable convergence properties lacking in most EC-based approaches. Prior applications of PFO are limited to single-objective optimization (SOO); this work extends the paradigm to MOO, leveraging path sampling to build proxy target probability density functions (pdfs) that encode varying trade-offs among conflicting objectives.

In MOO, given $M$ objectives $f_1, \ldots, f_M$ over a decision space $X$, the goal is to minimize $F(\mathbf{x}) = (f_1(\mathbf{x}), \ldots, f_M(\mathbf{x}))$ subject to $\mathbf{x} \in X$, identifying the Pareto set (PS) and Pareto front (PF). Conflicts among objectives preclude simultaneous minimization, inducing a need for algorithms to approximate the entire PF.
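To make the Pareto-set notion concrete, the following is a minimal sketch of a non-domination check for a matrix of objective values (minimization); the helper name and the example objective matrix are illustrative assumptions, not part of the paper:

```python
import numpy as np

def pareto_front_mask(F):
    """Boolean mask of non-dominated rows in an (n, M) matrix of
    objective values, assuming minimization. Illustrative helper."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        # Row j dominates row i if it is <= in every objective
        # and strictly < in at least one.
        dominated_by = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated_by.any():
            mask[i] = False
    return mask

F = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 3.0], [4.0, 1.0]])
print(pareto_front_mask(F))  # [ True  True False  True]
```

Here the third point (3, 3) is dominated by (2, 3) and so is excluded from the front.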

PFOPS Algorithmic Framework

PFOPS deploys a probabilistic search in the decision space using a sequence of target pdfs, each corresponding to a specific $\lambda \in [0,1]$ that balances objectives via path sampling. For bi-objective problems, the target pdfs are defined as

$$\tilde{\pi}_k(\mathbf{x}) \propto \exp\left(-\left[(1-\lambda_k)\,f_1(\mathbf{x}) + \lambda_k\,f_2(\mathbf{x})\right]\right),$$

where the sequence $\{\lambda_k\}_{k=1}^K$ interpolates between focusing solely on $f_1$ (at $\lambda_k = 0$) and solely on $f_2$ (at $\lambda_k = 1$). Each pdf defines a unique trade-off, and drawing samples from it provides estimates of Pareto optimal solutions at different points on the PF.
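A minimal sketch of this construction, assuming toy quadratic objectives over a one-dimensional decision space (the objectives, grid, and $K = 11$ are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Hypothetical quadratic objectives used only to illustrate the construction.
f1 = lambda x: np.sum(x**2, axis=-1)           # minimized at x = 0
f2 = lambda x: np.sum((x - 2.0)**2, axis=-1)   # minimized at x = 2

def log_target(x, lam):
    """Unnormalized log of pi_tilde_k for trade-off parameter lam in [0, 1]."""
    return -((1.0 - lam) * f1(x) + lam * f2(x))

lambdas = np.linspace(0.0, 1.0, 11)            # K = 11 trade-off points
grid = np.linspace(-1.0, 3.0, 401)[:, None]    # 1-D decision space, as columns
modes = [grid[np.argmax(log_target(grid, lam)), 0] for lam in lambdas]
# For these quadratics the mode of pi_tilde_k is x = 2 * lambda_k,
# so the mode sequence traces out the Pareto set as lambda sweeps [0, 1].
```

Sweeping $\lambda_k$ thus moves the high-density region of the target pdf along the trade-off curve.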

The core algorithm comprises the following procedural steps for each pdf:

  1. Importance Sampling and Weighting: Particles are reweighted by their exponentiated negative (scalarized) objective, adjusted by $\lambda_k$.
  2. Resampling: Particles are replicated according to these weights to address degeneracy, eliminating low-weight candidates.
  3. Componentwise Metropolis Moves: A Metropolis-Hastings MCMC step is introduced for particle diversity, perturbing each decision vector dimension while maintaining invariance of the target distribution.

Key to PFOPS is that the sequence of pdfs is explicit and problem-adaptable, not requiring model fitting at each step as in estimation of distribution algorithms (EDAs). The probability densities guide exploration toward the PF in a manner that is amenable to mathematical analysis and performance guarantees.
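The three steps above can be sketched for a sweep over trade-off points as follows; the toy scalar objectives, particle count, proposal scale, and number of Metropolis sweeps are assumptions for illustration, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(0)

f1 = lambda x: x**2             # toy scalar objectives (assumptions)
f2 = lambda x: (x - 2.0)**2

def scalarized(x, lam):
    return (1.0 - lam) * f1(x) + lam * f2(x)

def pfo_step(particles, lam, n_mcmc=5, step=0.3):
    """One PFO iteration targeting pi_k ∝ exp(-scalarized(x, lam))."""
    # 1. Importance weighting by the exponentiated negative scalarized objective
    logw = -scalarized(particles, lam)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # 2. Multinomial resampling to eliminate low-weight candidates
    particles = rng.choice(particles, size=particles.size, p=w)
    # 3. Random-walk Metropolis moves for diversity; pi_k stays invariant
    for _ in range(n_mcmc):
        prop = particles + step * rng.standard_normal(particles.size)
        log_acc = scalarized(particles, lam) - scalarized(prop, lam)
        accept = np.log(rng.random(particles.size)) < log_acc
        particles = np.where(accept, prop, particles)
    return particles

particles = rng.uniform(-3.0, 5.0, size=500)
for lam in np.linspace(0.0, 1.0, 11):
    particles = pfo_step(particles, lam)
# After the final step (lam = 1) the cloud concentrates near the
# minimizer of f2, i.e. around x = 2.
print(np.mean(particles))
```

At each intermediate $\lambda_k$ the particle cloud approximates one point's neighborhood on the trade-off curve, so collecting samples across the sweep yields an estimate of the whole front.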

For complex PF geometries (concave, discontinuous), PFOPS generalizes the proxy pdf using the Tchebycheff scalarization:

$$\pi_k(\mathbf{x}) = \exp\left(-\max\left\{(1-\lambda_k)\,|f_1(\mathbf{x}) - z_1^\star|,\ \lambda_k\,|f_2(\mathbf{x}) - z_2^\star|\right\}\right),$$

where $\mathbf{z}^\star$ is a utopian point, enhancing the method's ability to approximate general PF topologies.
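A minimal sketch of this Tchebycheff proxy as a log-density, assuming precomputed objective values and a utopian point supplied by the caller (both are illustrative assumptions):

```python
import numpy as np

def tcheby_log_target(F_vals, lam, z_star):
    """Unnormalized log of the Tchebycheff proxy pdf for one lam.
    F_vals: (n, 2) array of objective values; z_star: utopian point."""
    terms = np.stack(
        [(1.0 - lam) * np.abs(F_vals[:, 0] - z_star[0]),
         lam * np.abs(F_vals[:, 1] - z_star[1])],
        axis=1,
    )
    # log pi_k = -max of the two weighted deviations from the utopian point
    return -terms.max(axis=1)

F_vals = np.array([[1.0, 0.0], [0.0, 1.0]])
print(tcheby_log_target(F_vals, 0.5, np.array([0.0, 0.0])))  # [-0.5 -0.5]
```

Because the max-based scalarization can reach any Pareto-optimal point, this proxy remains usable when the front is concave or discontinuous, where the weighted-sum form cannot cover the whole front.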

Theoretical and Algorithmic Context

PFOPS stands in contrast to EC-based and EDA-based MOO schemes. Unlike EDAs, PFOPS obviates iterative model fitting—the set of target distributions is fully specified a priori, providing computational savings and stability. Compared to Bayesian optimization, PFOPS does not rely on surrogate modeling of expensive objectives, instead directly leveraging SMC for population-based search. The mathematical underpinnings (Bayesian filtering, SMC theory) provide a foundation for convergence analysis that is substantially more rigorous than meta-heuristic EC methods.

Empirical Evaluation

PFOPS was validated on three canonical bi-objective benchmarks spanning convex, concave, and discontinuous PFs:

  • Convex Case: Quadratic objectives over a bounded domain. Under sufficient sampling, PFOPS's PF estimates were nearly indistinguishable from NSGA-II's. Under undersampling (limited particles/iterations), PFOPS outperformed NSGA-II, with consistently lower deviation from the true PF.
  • Concave and Discontinuous Cases: Fonseca-Fleming and Kursawe test functions were used, employing the Tchebycheff scalarization strategy. PFOPS and NSGA-II achieved comparable performance, accurately covering the PF in both topologically challenging cases.

Computational burdens (measured by fitness evaluations and wall-clock time) were matched between methods to ensure fairness.

Implications and Future Directions

PFOPS highlights the viability of a Bayesian, filtering-based foundation for population-based MOO. The path sampling approach allows for systematic, controlled exploration of the trade-off surface, facilitating precise PF estimation even under sample constraints. The avoidance of iterative model updating (cf. EDA, BO) enhances scalability and reproducibility. Practically, PFOPS provides strong results on low-sample budgets, a regime critical in high-dimensional and computation-constrained settings.

Theoretically, PFOPS offers a framework open to formal analysis. Future work should focus on:

  • Extending the path-sampling construction to higher ($M > 2$) objective settings, addressing decomposition of the trade-off surfaces in higher dimensions.
  • Integrating simulated annealing strategies directly into the PFOPS sequence for improved global exploration.
  • Formalizing and characterizing the convergence rates and sample complexities relative to EC and EDA approaches.
  • Investigating other probabilistic constructions for target pdfs suitable for specific problem classes or constraints.

Conclusion

PFOPS constitutes a significant advancement in the application of PF and SMC techniques to MOO. By unifying SOO and MOO within one coherent framework based on Bayesian reasoning and population filtering, it opens a new direction for theoretically principled, flexible, and empirically robust multi-objective optimization methods. The formal structure and empirical findings suggest substantial promise for extension and further research.
