Chance-constrained optimization with tight confidence bounds (1711.03747v3)
Abstract: Convex sample approximations of chance-constrained optimization problems are considered, in which chance constraints are replaced by sets of sampled constraints. We propose a randomized sample selection strategy that allows tight bounds to be derived on the probability that the solution of the sample approximation is feasible for the original chance constraints when a subset of the sampled constraints is discarded. These confidence bounds are shown to be tighter than the bounds that apply if constraints are discarded according to optimal or greedy discarding strategies. We further show that the same confidence bounds apply to solutions obtained from a two-stage process in which a sample approximation of a chance-constrained problem is solved, and an empirical measure of the violation probability of the solution is then obtained by counting the number of violations of an additional set of sampled constraints. We use this result to design a repetitive scenario approach that meets required tolerances on violation probability given any specified a priori and a posteriori probabilities. These bounds are tighter than confidence bounds available for previously proposed repetitive scenario approaches, and we show that the posterior bounds are exact for a particular problem subclass. The approach is illustrated through numerical examples, and extensions to problems involving multiple chance constraints are discussed.
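To make the two-stage process concrete, the following is a minimal sketch, not the paper's implementation, of one iteration of a scenario approach: a convex sampled approximation of a chance-constrained linear program is solved, and the violation probability of the resulting solution is then estimated empirically from a fresh batch of sampled constraints. The problem data (a Gaussian model for the uncertain constraint vectors, the dimensions d, N, M, and the bound of 1 on each constraint) are all illustrative assumptions, and cvxpy is used only as a convenient convex solver.

```python
# Illustrative sketch: scenario approximation of the chance-constrained LP
#   minimize c^T x  subject to  P(a^T x <= 1) >= 1 - epsilon,
# with a drawn from a hypothetical Gaussian model (not from the paper).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
d = 5      # decision dimension (assumed for illustration)
N = 200    # number of sampled constraints in the first stage
M = 1000   # number of additional validation samples in the second stage
c = -np.ones(d)

def sample_a(n):
    # Hypothetical uncertainty model for the constraint vectors.
    return rng.normal(loc=1.0, scale=0.3, size=(n, d))

# Stage 1: solve the sampled (scenario) approximation.
A = sample_a(N)
x = cp.Variable(d)
prob = cp.Problem(cp.Minimize(c @ x),
                  [A @ x <= 1, cp.norm(x, "inf") <= 10])
prob.solve()
x_hat = x.value

# Stage 2: empirical violation probability from fresh, independent samples.
A_val = sample_a(M)
violations = int(np.sum(A_val @ x_hat > 1))
print(f"empirical violation probability: {violations / M:.3f}")
```

In the repetitive scenario approach described in the abstract, these two stages would be repeated, with the solution accepted only when the empirical violation count passes a posterior test; the sketch above shows a single iteration.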