
Markov Decision Processes (MDPs) Overview

Updated 18 November 2025
  • Markov Decision Processes (MDPs) are discrete-time stochastic models for sequential decision-making, defined by a state space, an action space, a transition kernel, a reward function, and a discount factor.
  • They underpin reinforcement learning and operations research, providing frameworks for robust, risk-sensitive, and high-dimensional control problems.
  • Recent developments include measurized MDPs, probabilistic constraints, and representation learning techniques that enhance policy synthesis and optimization.

A Markov Decision Process (MDP) is a discrete-time stochastic control process that models sequential decision-making under uncertainty. An MDP is defined by a state space, an action space, a transition mechanism, a reward function, and a discount factor. The aim is to synthesize policies that optimize an expected cumulative objective. MDPs form the core mathematical object in stochastic control, reinforcement learning, and operations research. The contemporary literature extends the MDP framework to incorporate general state/action spaces, robust and risk-sensitive objectives, learning from incomplete information, and domain-specific constraints.

1. Formal Structure and Canonical Properties

Let $(S, \mathcal{B}(S))$ be a Borel state space and $U$ a Borel action space. The canonical, discrete-time MDP is a tuple

$$\big(S,\, U,\, Q,\, r,\, \alpha\big)$$

where

  • $Q(\cdot|s,u)$ is a Markov transition kernel: for each $s \in S$ and $u \in U$, $Q(\cdot|s,u)$ is a probability measure on $S$;
  • $r : S \times U \rightarrow \mathbb{R}$ is a one-stage reward;
  • $\alpha \in (0,1)$ is a discount factor.

A policy is a measurable mapping $\pi : S \rightarrow \mathcal{P}(U)$ assigning (possibly randomized) actions to each state. The value function under policy $\pi$ is

$$V^{\pi}(s) = \mathbb{E}_\pi\left[\,\sum_{t=0}^{\infty}\alpha^t r(s_t, a_t) \,\mid\, s_0 = s\,\right].$$

The optimal value function satisfies the Bellman optimality equations:

$$V^*(s) = \sup_{u \in U} \Big\{ r(s,u) + \alpha \int_S V^*(s')\, Q(ds'|s,u) \Big\}.$$

For $U$ finite and $S$ countable, these reduce to the classical Bellman recurrences. Similar expressions underlie the average-reward and constrained MDPs.
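
For intuition, the finite case can be solved by fixed-point iteration on the Bellman operator. The following is a minimal value-iteration sketch on a hypothetical two-state, two-action MDP; the instance and numbers are illustrative and not drawn from any cited work.

```python
import numpy as np

# Minimal value-iteration sketch for a finite MDP (hypothetical toy instance).
# S = {0, 1}, U = {0, 1}; P[s, u, s'] plays the role of Q(s' | s, u),
# R[s, u] is the one-stage reward r(s, u), alpha is the discount factor.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
alpha = 0.95

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality update: Q(s, u) = r(s, u) + alpha * sum_s' P(s'|s,u) V(s')
    Q_values = R + alpha * P @ V           # shape (|S|, |U|)
    V_new = Q_values.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:  # sup-norm convergence check
        V = V_new
        break
    V = V_new

policy = Q_values.argmax(axis=1)           # greedy stationary policy
print("V* =", V, "policy =", policy)
```

At the fixed point, the greedy policy extracted from the action values is a stationary optimal policy for this finite instance.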

2. Existence, Measurability, and Semicontinuous–Semicompact Framework

When $S$ and $U$ are general Borel spaces, issues of measurability and continuity become paramount. The semicontinuous–semicompact framework, as developed by Hernández-Lerma and Lasserre, casts the MDP in a setting where the reward $r(s,u)$ is upper semicontinuous and bounded above, and the transition kernel $Q(\cdot|s,u)$ depends continuously (in the weak topology) on $(s,u)$. Under these and mild integrability assumptions, the Bellman operator admits fixed points in the space of bounded Borel-measurable functions; optimal measurable selectors exist, yielding stationary optimal policies. This facilitates extension to constrained and infinite-horizon models without the technicalities of universally measurable selectors (Adelman et al., 6 May 2024).

3. Lifting and the Measurized MDP Formalism

A crucial generalization is the measurized MDP, whereby the state space is lifted from points $s \in S$ to probability measures $\nu$ on $S$, formulated within the weak topology. The measurized MDP is specified by the tuple

$$\big(M,\, \Phi,\, \{\Phi(\nu)\}_{\nu \in M},\, q,\, r \big),$$

where $M$ is the set of probability measures on $(S, \mathcal{B}(S))$, $\Phi$ is the set of Markov decision rules (stochastic kernels), and $q$ encodes deterministic transitions on $M$ via

$$F(\nu, \phi)(\cdot) = \int_S \int_{U(s)} Q(\cdot|s,u)\, \phi(du|s)\, \nu(ds),$$

and $r(\nu,\phi) = \int_S \int_{U(s)} r(s,u)\, \phi(du|s)\, \nu(ds)$ is the lifted one-stage reward. The Bellman equations in this setting admit the same structure, with value functions $V^* \in C_b(M)$ (bounded Borel-measurable functions on $M$) and Borel-measurable selectors guaranteeing optimal stationary product policies. For Dirac measures $\delta_s$, this framework recovers the original MDP without loss of fidelity:

$$V^*(\delta_s) = V^{\text{orig}}(s), \quad \text{and} \quad V^*(\nu) = \int_S V^{\text{orig}}(s)\,\nu(ds).$$

The framework naturally incorporates external random shocks, risk or chance constraints, and supports nuanced approximation architectures (Adelman et al., 6 May 2024).
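
On a finite state space the lifted objects can be written out explicitly: $\nu$ becomes a probability vector, a decision rule $\phi$ becomes a row-stochastic matrix $\phi(u|s)$, and $F(\nu,\phi)$ and $r(\nu,\phi)$ are the corresponding mixtures. The sketch below uses a hypothetical two-state kernel to illustrate the lifted transition, the lifted reward, and the Dirac-measure consistency; it is not code from the cited paper.

```python
import numpy as np

# Lifted (measurized) dynamics on a finite state space: a measure nu is a
# probability vector over S, a decision rule phi is a row-stochastic matrix
# phi[s, u] = phi(u | s). Hypothetical toy kernel and reward, for illustration.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # Q(s' | s, u)
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])                  # r(s, u)

def lifted_transition(nu, phi):
    # F(nu, phi)(s') = sum_s sum_u Q(s' | s, u) phi(u | s) nu(s)
    return np.einsum("s,su,sut->t", nu, phi, P)

def lifted_reward(nu, phi):
    # r(nu, phi) = sum_s sum_u r(s, u) phi(u | s) nu(s)
    return np.einsum("s,su,su->", nu, phi, R)

phi = np.array([[0.3, 0.7],
                [1.0, 0.0]])                # a randomized decision rule
nu = np.array([0.6, 0.4])
print(lifted_transition(nu, phi), lifted_reward(nu, phi))

# Dirac consistency: starting from delta_0, the lifted model reproduces the
# one-step behaviour of the original MDP at state 0.
delta_0 = np.array([1.0, 0.0])
print(lifted_transition(delta_0, phi))      # equals sum_u phi(u|0) Q(. | 0, u)
```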

4. Constraints and Value Function Approximations

The lifting approach enables constraints and approximations not expressible in the classical space:

  • Risk constraints (e.g., CVaR): $\Phi(\nu)$ is restricted to policies satisfying

$$\operatorname{CVaR}_\beta(c;\nu,\phi) \leq \theta,$$

where $c(s,u)$ is the cost and $\operatorname{CVaR}_\beta$ is defined in terms of the conditional tail expectation; the Bellman equation then optimizes over this risk-constrained policy set.

  • Probabilistic state constraints: For instance, bounding the variance $\operatorname{Var}_{\nu,\phi}(u) \leq \sigma^2$ further restricts feasible $\phi$, affecting the supremum in the Bellman update.
  • Value function approximations: By expanding $V^*(\nu)$ in basis functions (moments, Laplace transforms, divergences) and solving for weights $w$ in

$$V(\nu) \approx \sum_k w_k \psi_k(\nu),$$

the MDP solution reduces to a convex optimization over the function class (Adelman et al., 6 May 2024).
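
As a concrete illustration of the last item, one can fit the weights $w$ by least squares against exact lifted values $\int_S V^{\text{orig}}(s)\,\nu(ds)$ computed from a known base value function, using a constant and low-order moment features $\psi_k(\nu)$. The instance below is hypothetical and uses plain least squares rather than the convex program of the cited work.

```python
import numpy as np

# Sketch: approximate the lifted value V(nu) by a weighted sum of basis
# features psi_k(nu) (constant plus first two moments of nu over a numeric
# state grid), fitting the weights w by least squares. Toy instance only.
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 5)               # numeric labels for 5 states
V_orig = np.array([0.2, 0.5, 1.0, 1.7, 2.6])  # assumed base-MDP values

def features(nu):
    # psi(nu) = (1, E_nu[g], E_nu[g^2]) for the state label g
    return np.array([1.0, nu @ grid, nu @ grid**2])

# Sample random measures nu and their exact lifted values integral V_orig dnu
nus = rng.dirichlet(np.ones(5), size=200)
targets = nus @ V_orig
Psi = np.array([features(nu) for nu in nus])

w, *_ = np.linalg.lstsq(Psi, targets, rcond=None)
approx = Psi @ w
print("weights:", w)
print("max abs error over samples:", np.max(np.abs(approx - targets)))
```

Because the lifted value is linear in $\nu$, a small moment basis can already capture it closely on this toy instance; richer bases (Laplace transforms, divergences) play the same role in the general architecture.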

5. Representation Learning and State Compression

A central technical challenge in high-dimensional domains is constructing low-dimensional, Markovian feature representations. The formalism of feature Markov Decision Processes ($\Phi$MDPs) encodes any history compression $\Phi:\mathcal{H}\to S$ (from history space to state space) and defines penalized code-length cost functions to score candidate $\Phi$. The optimal $\Phi$ minimizes this cost, yielding an induced process that is approximately Markov (0812.4580). Alternating deep neural networks (ADNN) further automate this discovery, learning encoders $\phi$ such that the reduced process is itself Markov and sufficient for optimal control; conditional independence criteria (residual tests, Brownian distance covariance) ensure fidelity with the original process, and group-lasso regularization yields sparsity for interpretability (Wang et al., 2017).
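
A simplified way to see the $\Phi$MDP selection criterion: each candidate compression $\Phi$ induces a state sequence from the observed history, and candidates can be compared by a penalized code length. The sketch below approximates that score by the negative log-likelihood of the induced Markov chain plus a BIC-style penalty; this is an illustrative simplification of the cost in (0812.4580), not the exact criterion.

```python
import numpy as np
from collections import Counter

# Illustrative scoring of candidate history compressions Phi: negative
# log-likelihood of the induced Markov chain plus a BIC-style penalty on the
# number of free transition parameters (lower score = preferred Phi).
def score(states, n_states):
    pairs = Counter(zip(states[:-1], states[1:]))
    counts = np.zeros((n_states, n_states))
    for (i, j), c in pairs.items():
        counts[i, j] = c
    row = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row, out=np.zeros_like(counts), where=row > 0)
    logp = np.log(probs, out=np.zeros_like(probs), where=probs > 0)
    nll = -np.sum(counts * logp)
    penalty = 0.5 * n_states * (n_states - 1) * np.log(max(len(states) - 1, 1))
    return nll + penalty

# Hypothetical observation sequence and two candidate compressions Phi:
obs = np.random.default_rng(1).integers(0, 4, size=500)
phi_coarse = obs // 2        # maps 4 observations onto 2 states
phi_fine = obs               # identity compression, 4 states
print("coarse:", score(phi_coarse, 2), "fine:", score(phi_fine, 4))
```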

6. Robust, Risk-Sensitive, and Distributionally Robust Extensions

MDPs are generalized to address model uncertainty (robust MDPs, RMDPs) and risk via recursive risk measures:

  • Robust MDPs: Transition probabilities are uncertain within nonempty convex sets $U(s,a)$, yielding robust Bellman equations of the form

$$V^*(s) = \max_{a}\;\min_{P\in U(s,a)}\Big\{ R(s,a) + \gamma \sum_{s'} P(s'|s,a)V^*(s') \Big\},$$

which is equivalent to a zero-sum stochastic game against nature (Suilen et al., 18 Nov 2024); a value-iteration sketch for a rectangular uncertainty set follows this list.

  • Risk-sensitive MDPs: The objective is the recursive application of coherent, law-invariant risk measures $\rho$; the value function solves

$$v^*(x) = \inf_{a\in D(x)}\;\rho\Big( c(x,a,T(x,a,Z)) + \beta\,v^*(T(x,a,Z)) \Big).$$

Under mild contractivity, unique fixed points and optimal stationary policies exist; the dual representation of $\rho$ provides a direct link to distributionally robust control (Bäuerle et al., 2020).

  • Distributionally robust chance-constrained MDPs: Ambiguity in the reward distribution is handled via moment, $\varphi$-divergence, or Wasserstein ambiguity sets. The robust optimization reduces to tractable SOCP/MISOCP or copositive programs, yielding policies with formal chance-constraint guarantees (Nguyen et al., 2022).
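
As referenced in the robust MDP item above, the robust Bellman equation admits a direct value-iteration sketch when the uncertainty set is $(s,a)$-rectangular. The hypothetical instance below uses a finite set of candidate kernels per state-action pair, so nature's inner minimization is a minimum over a finite set; richer convex sets would replace that step with an inner optimization.

```python
import numpy as np

# Robust value iteration sketch with a finite (s, a)-rectangular uncertainty
# set: for each state-action pair, nature picks the worst of K candidate
# transition kernels. Hypothetical toy instance, for illustration only.
rng = np.random.default_rng(2)
nS, nA, K = 3, 2, 3
P_cands = rng.dirichlet(np.ones(nS), size=(nS, nA, K))  # P_cands[s, a, k, s']
R = rng.uniform(0.0, 1.0, size=(nS, nA))
gamma = 0.9

V = np.zeros(nS)
for _ in range(500):
    EV = P_cands @ V            # expected next value per candidate, shape (nS, nA, K)
    worst = EV.min(axis=2)      # inner minimization over nature's choice
    V_new = (R + gamma * worst).max(axis=1)   # outer maximization over actions
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

print("robust V* =", V)
```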

7. Learning Under Partial Knowledge, Unawareness, and Nonstandard Extensions

MDPs have been extended to address partial knowledge, unawareness, non-cumulative objectives, and non-stationarity:

  • Learning with Unawareness: In MDPs with unawareness (MDPUs), the agent is initially unaware of part of the action set. Discovery is modelled via an “explore” action and a stochastic discovery process $D(j,t)$. Near-optimal play is possible iff $\sum_t D(1,t) = \infty$, with polynomial-time learning characterized by the growth rate of this sum (Halpern et al., 2014).
  • Non-cumulative Objectives: For decision processes with non-cumulative return functions (e.g., maximizing the maximum reward over time), constructing a “lifted” MDP whose augmented state captures the relevant statistic of the past reward sequence enables reduction to standard RL and dynamic programming algorithms while retaining optimality (Nägele et al., 22 May 2024); a minimal sketch of this lifting follows the list.
  • Non-stationary or Externally Modulated MDPs: When transitions depend on external temporal processes or histories, the formalism augments the state with finite-memory histories; under suitable decay of exogenous effects, optimality is achieved via truncated history, with error bounded in terms of total variation (Ayyagari et al., 2023).
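
One simple way to realize the lifting for the running-maximum objective is to augment the state with the best reward seen so far and reward only its increments, so that the cumulative return of the lifted process telescopes to $\max_t r_t$. The wrapper below sketches this idea on a hypothetical toy environment; the construction in the cited paper may differ in its details.

```python
import numpy as np

class ToyEnv:
    """Hypothetical 5-step environment with random rewards (illustration only)."""
    def __init__(self, seed=3):
        self.rng = np.random.default_rng(seed)
        self.t = 0
        self.rewards = []
    def reset(self):
        self.t, self.rewards = 0, []
        return 0
    def step(self, action):
        self.t += 1
        r = float(self.rng.uniform(0, 1))
        self.rewards.append(r)
        return self.t, r, self.t >= 5

class MaxRewardLift:
    """Wrapper whose cumulative reward equals the max of the original rewards."""
    def __init__(self, env):
        self.env, self.best = env, 0.0
    def reset(self):
        s, self.best = self.env.reset(), 0.0   # assumes nonnegative rewards
        return (s, self.best)                  # augmented state (s, max-so-far)
    def step(self, action):
        s, r, done = self.env.step(action)
        new_best = max(self.best, r)
        inc, self.best = new_best - self.best, new_best
        return (s, self.best), inc, done       # increments telescope to max_t r_t

env = ToyEnv()
lifted = MaxRewardLift(env)
lifted.reset()
total, done = 0.0, False
while not done:
    _, inc, done = lifted.step(action=0)
    total += inc
print(total, "==", max(env.rewards))           # lifted return equals max_t r_t
```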

In summary, the theory of MDPs encompasses discrete- and continuous-parameter models, broad classes of uncertainties and constraints, and advanced frameworks for representation, approximation, and robust/learning-based control. Lifting to measure spaces, compositional risk constraints, and function-approximation architectures exemplify ongoing innovations (Adelman et al., 6 May 2024). The contemporary MDP is not only a canonical model for sequential stochastic decision making but also a substrate on which generalizations for robustness, risk sensitivity, high-dimensionality, unawareness, and statistical learning are rigorously built.
