
Stochastic Dynamic Linear Programming: A Sequential Sampling Algorithm for Multistage Stochastic Linear Programming (2010.12129v1)

Published 23 Oct 2020 in math.OC

Abstract: Multistage stochastic programming deals with operational and planning problems that involve a sequence of decisions made over time in response to uncertain realizations. Algorithms designed to address multistage stochastic linear programming (MSLP) problems often rely upon scenario trees to represent the underlying stochastic process. When this process exhibits stagewise independence, sampling-based techniques, particularly the stochastic dual dynamic programming (SDDP) algorithm, have received wide acceptance. However, these sampling-based methods still operate on a deterministic representation of the problem obtained via the so-called sample average approximation. In this work, we present a sequential sampling approach for MSLP problems that allows the decision process to assimilate newly sampled data recursively. We refer to this method as the stochastic dynamic linear programming (SDLP) algorithm. Since it uses sequential sampling, the algorithm does not require an a priori representation of uncertainty, either through a scenario tree or a sample average approximation, both of which presume knowledge or estimation of the underlying distribution. In this regard, SDLP is a sequential sampling approach to MSLP problems, and it constitutes a generalization of the Stochastic Decomposition (SD) algorithm for two-stage SLP models. We employ quadratic regularization for the optimization problems in the non-terminal stages. Furthermore, we introduce the notion of basic feasible policies, which provide a piecewise-affine solution discovery scheme embedded within the optimization algorithm to identify the incumbent solutions used for regularization. Finally, we show that the SDLP algorithm produces a sequence of decisions and corresponding value function estimates, along a sequence of state trajectories, that asymptotically converge to their optimal counterparts with probability one.
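The core idea the abstract describes — assimilating one new sample per iteration into a recursively updated, cutting-plane approximation of the cost-to-go function, rather than fixing a sample average approximation up front — can be illustrated on a toy problem. The sketch below is a hypothetical, heavily simplified illustration in the spirit of two-stage Stochastic Decomposition (the method SDLP generalizes), not the paper's SDLP algorithm: it uses a made-up newsvendor-style instance, averaged cuts downweighted over iterations, and a grid search standing in for the master LP/QP solve.

```python
import random

# Hypothetical toy instance (not from the paper): a two-stage SLP
#   minimize  c*x + E[Q(x, w)],   Q(x, w) = p * max(w - x, 0),  0 <= x <= 10,
# with demand w ~ Uniform(0, 10). Sequential sampling: each iteration draws
# ONE new sample, forms an averaged cut on E[Q] at the incumbent, downweights
# the old cuts (as SD does), and re-solves the master problem.

c, p = 1.0, 3.0
random.seed(0)

def master(cuts):
    """Minimize c*x + max over cuts of (a + b*x), via a coarse grid search
    that stands in for an LP/QP master solve."""
    best_x, best_val = 0.0, float("inf")
    for i in range(201):
        x = 10.0 * i / 200
        theta = max((a + b * x for a, b in cuts), default=0.0)
        val = c * x + theta
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

cuts, samples = [], []
x_hat = 0.0                                    # incumbent decision
for k in range(1, 301):
    samples.append(random.uniform(0.0, 10.0))  # assimilate one new sample
    # Sample-average value and subgradient of Q at the incumbent
    v = sum(p * max(w - x_hat, 0.0) for w in samples) / k
    g = sum(-p for w in samples if w > x_hat) / k
    # Scale old cuts by (k-1)/k: since Q >= 0, they remain valid lower
    # bounds on the growing k-sample average function
    cuts = [(a * (k - 1) / k, b * (k - 1) / k) for a, b in cuts]
    cuts.append((v - g * x_hat, g))            # theta >= v + g*(x - x_hat)
    x_hat, obj = master(cuts)

# For this instance the critical-fractile optimum is x* = 20/3 ~ 6.67 with
# expected cost 25/3 ~ 8.33; after 300 samples the iterates should be nearby.
print(x_hat, obj)
```

The key contrast with an SAA-based method such as SDDP is visible in the loop: no scenario set is fixed in advance, and every iteration's model is rebuilt from the statistics of all samples observed so far. The quadratic regularization and basic feasible policies of the actual SDLP algorithm are omitted here.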
