Forward-Backward Quantization of Scenario Processes in Multi-Stage Stochastic Optimization (2508.18112v1)

Published 25 Aug 2025 in math.OC

Abstract: Multi-stage stochastic optimization lies at the core of decision-making under uncertainty. As analytical solutions are available only in exceptional cases, dynamic optimization aims to find approximations efficiently but often neglects non-Markovian time-interdependencies. Methods on scenario trees can represent such interdependencies but are subject to the curse of dimensionality. To ease this problem, researchers typically approximate the uncertainty by smaller, yet accurate, trees. In this article, we focus on optimal multi-stage tree quantization methods for time-interdependent stochastic processes, for which we develop novel bounds and demonstrate that the upper bound can be minimized via projected gradient descent with the tree structure incorporated as linear constraints. Consequently, we propose an efficient quantization procedure that improves forward-looking samples using a backward step on the tree. We apply the results to multi-stage inventory control with time-interdependent demand. For the case with one product, we benchmark the approximation because the problem admits a closed-form solution. For the multi-dimensional problem, the solution found by the optimal discrete approximation demonstrates the importance of holding mitigation inventory in different phases of the product life cycle.
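
The abstract names the key computational ingredient: projected gradient descent on the tree's scenario values, with the branching structure encoded as linear equality constraints (scenarios passing through the same node must carry the same value at that stage), so the projection reduces to nodewise averaging. The sketch below illustrates one plausible reading of this forward-backward idea in Python; the array shapes, node labelling, the squared quantization error used as a stand-in for the paper's upper bound, and all function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def project_onto_tree(paths, node_ids):
    """Euclidean projection onto the tree structure: scenarios that share a
    node at stage t are forced to carry the same value at stage t, which for
    these linear equality constraints amounts to averaging within each node."""
    projected = paths.copy()
    n_stages = paths.shape[1]
    for t in range(n_stages):
        for node in np.unique(node_ids[:, t]):
            mask = node_ids[:, t] == node
            projected[mask, t] = paths[mask, t].mean()
    return projected

def quantize_tree(samples, tree_paths, node_ids, lr=0.1, iters=200):
    """Projected (sub)gradient descent on a squared quantization error,
    used here as a proxy for the upper bound described in the abstract."""
    paths = tree_paths.copy()
    n_samples = samples.shape[0]
    for _ in range(iters):
        # forward step: assign every sample path to its nearest tree path
        d2 = ((samples[:, None, :] - paths[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        # subgradient of the quantization error w.r.t. the tree values
        grad = np.zeros_like(paths)
        for k in range(paths.shape[0]):
            assigned = samples[assign == k]
            if len(assigned):
                grad[k] = 2.0 * (paths[k] - assigned.mean(axis=0)) * len(assigned) / n_samples
        # backward step: gradient update followed by projection onto the tree
        paths = project_onto_tree(paths - lr * grad, node_ids)
    return paths

# Illustrative usage: 1000 forward-simulated random-walk sample paths and a
# small binary tree over 3 stages (8 leaf scenarios); all numbers are made up.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, N, K = 3, 1000, 8
    samples = np.cumsum(0.5 * rng.standard_normal((N, T)), axis=1)
    # node labels: 2 nodes at stage 0, 4 at stage 1, 8 at stage 2
    node_ids = np.array([[k // 4, k // 2, k] for k in range(K)])
    init = samples[rng.choice(N, size=K, replace=False)]
    tree = quantize_tree(samples, project_onto_tree(init, node_ids), node_ids)
    print(tree.round(2))
```

Here `node_ids[k, t]` labels the tree node visited by scenario `k` at stage `t`: forward-simulated samples initialize and drive the gradient, while the projection acts as the backward step that restores consistency with the tree structure.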
