Stochastic Optimization with Optimal Importance Sampling
(2504.03560v1)
Published 4 Apr 2025 in math.OC, cs.LG, math.ST, stat.ML, and stat.TH
Abstract: Importance Sampling (IS) is a widely used variance reduction technique for enhancing the efficiency of Monte Carlo methods, particularly in rare-event simulation and related applications. Despite its power, the performance of IS is often highly sensitive to the choice of the proposal distribution and frequently requires stochastic calibration techniques. While the design and analysis of IS have been extensively studied in estimation settings, applying IS within stochastic optimization introduces a unique challenge: the decision and the IS distribution are mutually dependent, creating a circular optimization structure. This interdependence complicates both the analysis of convergence for decision iterates and the efficiency of the IS scheme. In this paper, we propose an iterative gradient-based algorithm that jointly updates the decision variable and the IS distribution without requiring time-scale separation between the two. Our method achieves the lowest possible asymptotic variance and guarantees global convergence under convexity of the objective and mild assumptions on the IS distribution family. Furthermore, we show that these properties are preserved under linear constraints by incorporating a recent variant of Nesterov's dual averaging method.
Summary
The paper presents a joint update scheme for decision variables and IS parameters, eliminating nested loops and reducing variance.
It leverages a variant of Nesterov's Dual Averaging to efficiently handle linear constraints while updating both optimization and sampling variables.
Averaged iterates achieve theoretically optimal asymptotic performance, matching the variance of an ideal importance sampling distribution.
This paper introduces a novel algorithm for solving constrained convex stochastic optimization problems of the form $\min_{\theta \in \Theta} \mathbb{E}_{X \sim P}[F(\theta, X)]$, where $\Theta = \{\theta \in \mathbb{R}^s : A\theta \le b\}$. The key challenge addressed is the high variance often encountered when using standard stochastic gradient methods, particularly when the expectation involves rare events. Importance Sampling (IS) is leveraged as a variance reduction technique.
The core difficulty with applying IS in optimization is the "curse of circularity": the optimal IS distribution depends on the unknown optimal solution $\theta^\star$, while finding $\theta^\star$ efficiently requires a good IS distribution. Existing methods often require nested loops, time-scale separation, or prior knowledge of the mapping from $\theta$ to the optimal IS parameters.
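To make the variance issue concrete, here is a minimal numerical illustration (our own toy example, not from the paper): estimating the gradient of a rare-event objective $f(\theta) = \mathbb{E}[\mathbb{1}\{X > 4\}(\theta - X)^2]$ with $X \sim \mathcal{N}(0,1)$ by plain Monte Carlo versus mean-translation IS.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: f(theta) = E[ 1{X > 4} * (theta - X)**2 ], X ~ N(0, 1).
# The indicator fires with probability ~3e-5, so plain Monte Carlo gradient
# samples 2 * 1{X > 4} * (theta - X) are almost always exactly zero.
theta, c, n = 0.0, 4.0, 100_000

# Plain Monte Carlo gradient samples.
x = rng.standard_normal(n)
g_mc = 2.0 * (x > c) * (theta - x)

# Mean-translation IS: sample X ~ N(mu, 1) and reweight by the likelihood
# ratio dP/dP_mu(x) = exp(-mu * x + mu**2 / 2).
mu = 4.0
x_is = rng.standard_normal(n) + mu
lr = np.exp(-mu * x_is + 0.5 * mu**2)
g_is = 2.0 * (x_is > c) * (theta - x_is) * lr

print(f"MC gradient estimate: mean {g_mc.mean():.3e}, std {g_mc.std():.3e}")
print(f"IS gradient estimate: mean {g_is.mean():.3e}, std {g_is.std():.3e}")
```

The IS estimator hits the rare region on roughly half the draws instead of a handful, which is exactly the variance reduction the paper seeks to automate inside the optimization loop.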
Key Contributions and Method
The paper proposes a single-loop iterative algorithm that jointly updates the decision variable $\theta$ and the parameters $\mu$ of an IS distribution $P_\mu$ from a predefined family indexed by $M = \{\mu \in \mathbb{R}^m : C\mu \le d\}$. The algorithm avoids the need for time-scale separation or nested optimization.
Joint Update Scheme: The algorithm applies a variant of Nesterov's Dual Averaging (NDA) to the combined state vector $(\theta, \mu)$, driven by two stochastic gradients:
$G_k = G_{\mu_k}(\theta_k, X_{k+1}^{(\mu_k)}) = G(\theta_k, X_{k+1}^{(\mu_k)})\,\ell(X_{k+1}^{(\mu_k)}, \mu_k)$. This is the IS gradient for the primary objective $f(\theta)$. It uses a sample $X_{k+1}^{(\mu_k)}$ drawn from the current IS distribution $P_{\mu_k}$, where $G(\theta, x) = \nabla_\theta F(\theta, x)$ and $\ell(x, \mu) = \frac{dP}{dP_\mu}(x)$ is the likelihood ratio.
$H_k = H(\theta_k, \mu_k, X_{k+1}) = \|P_{A_a^{\theta_k}} G(\theta_k, X_{k+1})\|^2\, \nabla_\mu \ell(X_{k+1}, \mu_k)$. This is the stochastic gradient for the IS parameter update. It aims to minimize the variance $v(\theta, \mu) = \mathbb{E}_{X \sim P}\big[\|P_{A_a^{\theta}} G(\theta, X)\|^2\, \ell(X, \mu)\big]$, ultimately evaluated at $\theta^\star$. It uses a sample $X_{k+1}$ drawn from the original distribution $P$; here $P_{A_a^{\theta}}$ is the projector onto the null space of the constraints active at $\theta$. (A minimal sketch of both estimators follows below.)
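A minimal sketch of how the two estimators could be formed at iteration $k$, assuming a Gaussian mean-translation family $P_\mu = \mathcal{N}(\mu, I)$ with base $P = \mathcal{N}(0, I)$ and no active constraints (so the projector is the identity); `grad_F` and `sample_P` are hypothetical user-supplied callables, and the helper names are ours:

```python
import numpy as np

def likelihood_ratio(x, mu):
    # ell(x, mu) = dP/dP_mu(x) for P = N(0, I), P_mu = N(mu, I).
    return np.exp(-x @ mu + 0.5 * mu @ mu)

def grad_mu_likelihood_ratio(x, mu):
    # nabla_mu ell(x, mu) = (mu - x) * ell(x, mu).
    return (mu - x) * likelihood_ratio(x, mu)

def joint_gradients(theta, mu, grad_F, sample_P):
    """One draw of the two stochastic gradients G_k and H_k.

    grad_F(theta, x) returns nabla_theta F(theta, x); sample_P() draws
    a standard normal vector X ~ P. Assumes no active constraints, so
    the active-set projector is the identity.
    """
    # G_k: IS gradient of f(theta), using a reweighted sample X ~ P_mu.
    x_mu = sample_P() + mu            # mean translation: X^(mu) ~ N(mu, I)
    G_k = grad_F(theta, x_mu) * likelihood_ratio(x_mu, mu)

    # H_k: stochastic gradient of the variance objective
    # v(theta, mu) = E_{X~P}[ ||grad_F(theta, X)||^2 * ell(X, mu) ],
    # built from an independent sample X drawn from the *original* P.
    x = sample_P()
    H_k = np.sum(grad_F(theta, x) ** 2) * grad_mu_likelihood_ratio(x, mu)
    return G_k, H_k
```

Note that $\log \ell(x, \mu) = -x^\top \mu + \|\mu\|^2/2$ is convex in $\mu$, so this family satisfies the paper's log-convexity assumption.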
Theoretical Guarantees:
Global Convergence: Under convexity of $f(\theta)$ and log-convexity of $\ell(x, \mu)$ with respect to $\mu$, along with other regularity conditions, the iterates $(\theta_n, \mu_n)$ converge almost surely to $(\theta^\star, \mu^\star)$, where $\theta^\star$ minimizes $f(\theta)$ and $\mu^\star$ minimizes the asymptotic variance $v(\theta^\star, \mu)$ over $M$.
Asymptotic Optimality: The averaged iterates $\bar{\theta}_n$ achieve the minimal possible asymptotic variance among all methods using the given IS family $\{P_\mu\}$. The central limit theorem (CLT) holds:
$$\sqrt{n}\,(\bar{\theta}_n - \theta^\star) \xrightarrow{d} \mathcal{N}(0, \Sigma_G^\star),$$
where $\Sigma_G^\star = Q^{\dagger}\, \mathrm{Var}_{X^{(\mu^\star)} \sim P_{\mu^\star}}\!\left[G_{\mu^\star}(\theta^\star, X^{(\mu^\star)})\right] Q^{\dagger}$ and $Q = P_{A_a^\star}\, \nabla^2 f(\theta^\star)\, P_{A_a^\star}$. This matches the variance achievable if the optimal IS distribution $P_{\mu^\star}$ were known beforehand.
Constraint Handling: The use of NDA ensures the method correctly handles linear constraints on both $\theta$ and $\mu$, and identifies the active constraints in finite time almost surely.
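As an illustration of the active-set machinery (a plain linear-algebra sketch under our own conventions; the paper's exact construction may differ), the projector onto the null space of the constraint rows active at a point $\theta$ could be computed as:

```python
import numpy as np

def active_set_projector(A, b, theta, tol=1e-9):
    """Projector onto the null space of the rows of A active at theta.

    For the polytope {theta : A @ theta <= b}, row a_i is active when
    a_i @ theta == b_i up to `tol`. Returns P such that P @ v is the
    projection of v onto {w : A_a @ w = 0}.
    """
    active = np.abs(A @ theta - b) <= tol
    A_a = A[active]
    if A_a.size == 0:
        return np.eye(len(theta))          # interior point: no active rows
    # P = I - A_a^+ A_a projects onto the null space of A_a.
    return np.eye(len(theta)) - np.linalg.pinv(A_a) @ A_a
```

Since NDA identifies the active set in finite time, this projector eventually stabilizes at $P_{A_a^\star}$.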
Practical Implementation Details
Applicable IS Families: The method works for common IS families where $\ell(x, \mu)$ is log-convex and differentiable in $\mu$ (a worked example follows this list), such as:
Exponential Tilting (ET)
Mean Translation (MT) for log-concave base distributions
Mixture Models
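For instance, under exponential tilting of a rate-1 exponential base distribution (an illustrative choice of ours, not one singled out by the paper), the likelihood ratio, its $\mu$-gradient, and sampling from $P_\mu$ all have closed forms:

```python
import numpy as np

# Exponential tilting of P = Exp(1): p_mu(x) = p(x) * exp(mu * x - psi(mu)),
# with cumulant generating function psi(mu) = -log(1 - mu), valid for mu < 1.
# The tilted law P_mu is Exp(1 - mu), and the likelihood ratio is
#   ell(x, mu) = dP/dP_mu(x) = exp(psi(mu) - mu * x),
# so log ell(x, mu) = psi(mu) - mu * x is convex in mu (psi is convex).

def psi(mu):
    return -np.log1p(-mu)                  # -log(1 - mu)

def likelihood_ratio(x, mu):
    return np.exp(psi(mu) - mu * x)

def grad_mu_likelihood_ratio(x, mu):
    # d/dmu ell = (psi'(mu) - x) * ell, with psi'(mu) = 1 / (1 - mu).
    return (1.0 / (1.0 - mu) - x) * likelihood_ratio(x, mu)

def sample_P_mu(rng, mu, size):
    # Exp(1 - mu) has scale 1 / (1 - mu).
    return rng.exponential(scale=1.0 / (1.0 - mu), size=size)
```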
Computational Requirements:
Ability to sample from the original distribution $P$.
Ability to sample from the IS distribution $P_\mu$ for any feasible $\mu$.
Ability to compute the gradient $G(\theta, x) = \nabla_\theta F(\theta, x)$.
Ability to compute the likelihood ratio $\ell(x, \mu)$ and its gradient $\nabla_\mu \ell(x, \mu)$.
Ability to compute the active-constraint projector $P_{A_a^\theta}$ (the analysis relies on $P_{A_a^\star}$, but the algorithm uses the current iterate's active set).
Solving the NDA subproblem at each iteration, which involves minimizing a quadratic function over the feasible set $\Theta \times M$; this is often a projection-like operation (a generic sketch follows this list).
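A generic sketch of that subproblem (assuming the standard dual-averaging form $\min_{z \in Z} \langle \bar g, z\rangle + \tfrac{\beta}{2}\|z - z_0\|^2$ over a polytope $Z = \{z : A_z z \le b_z\}$; the paper's NDA variant may weight gradients and choose $\beta$ differently):

```python
import numpy as np
from scipy.optimize import minimize

def nda_subproblem(g_sum, z0, beta, A_z, b_z):
    """Solve min_{z : A_z @ z <= b_z}  <g_sum, z> + (beta/2) * ||z - z0||^2.

    g_sum: running weighted sum of stochastic gradients for (theta, mu);
    z0: prox center; beta: regularization weight. This is the generic
    dual-averaging quadratic subproblem, a projection-like operation.
    """
    def obj(z):
        return g_sum @ z + 0.5 * beta * np.sum((z - z0) ** 2)

    def grad(z):
        return g_sum + beta * (z - z0)

    # SLSQP inequality convention: fun(z) >= 0, i.e. b_z - A_z @ z >= 0.
    cons = [{"type": "ineq",
             "fun": lambda z: b_z - A_z @ z,
             "jac": lambda z: -A_z}]
    res = minimize(obj, x0=z0, jac=grad, constraints=cons, method="SLSQP")
    return res.x
```

Here $z$ stacks $(\theta, \mu)$, and $A_z z \le b_z$ encodes the block constraints $A\theta \le b$ and $C\mu \le d$.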
Assumptions for Implementation: The underlying objective $f(\theta)$ must be convex. The chosen IS family must satisfy the log-convexity and differentiability assumptions on $\ell(x, \mu)$. The feasible sets $\Theta$ and $M$ must be convex, closed, and bounded polytopes (defined by linear inequalities).
Secondary IS: The paper notes that the variance of the $H_k$ gradient itself could be reduced using another layer of IS (secondary IS), suggesting potential strategies without exploring them in depth.
Summary of Benefits
Provides a principled way to adapt the IS distribution during optimization without needing prior knowledge of the optimal IS parameters.
Achieves theoretically optimal asymptotic performance within the chosen IS family.
Unified single-loop approach simplifies implementation compared to multi-level or alternating methods.
Naturally handles linear constraints on both decision and IS parameters.
This work offers a theoretically sound framework for integrating adaptive importance sampling into constrained stochastic optimization, potentially leading to significant efficiency gains in problems with high variance or rare events.