
Bruss's Odds Theorem & Optimal Stopping Rule

Updated 27 November 2025
  • Bruss's Odds Theorem is a fundamental result in optimal stopping theory that defines a sum-of-odds rule to maximize the chance of stopping on the final success in a sequence.
  • Its proof uses dynamic programming and backward induction to establish a unique threshold; the resulting rule is computationally simple and comes with clear quantitative probability bounds.
  • The theorem has practical applications in secretary problems, clinical trials, and various online selection scenarios, ensuring optimal decision-making across diverse settings.

Bruss's Odds Theorem is a fundamental result in optimal stopping theory addressing sequential selection from finite sequences of independent Bernoulli trials. Its core contribution is the identification of an explicit rule—the "sum-of-odds" strategy—for maximizing the probability of successfully stopping on the very last occurrence of a specified event, which is mathematically formalized as maximizing the probability of stopping on the final "success" (i.e., the last 1) in a sequence of 0–1 (failure–success) observations. The theorem's generality and computational simplicity have established it as a central tool in both theoretical and applied sequential decision problems, including secretary problems, search theory, and clinical trials (Dendievel, 2012, Bruss, 2019).

1. Mathematical Formulation and Statement

Let $I_1,\ldots,I_n$ be independent Bernoulli random variables on a probability space $(\Omega, \mathfrak F, \mathbf P)$, with success probabilities $p_j = \mathbf P(I_j = 1)$ and $q_j = 1 - p_j$. The objective is to select a stopping time $\tau$, relative to the natural filtration, so as to maximize the probability of stopping exactly on the final 1:

$$V_n = \max_\tau \mathbf P(I_\tau = 1,\ I_{\tau+1} = 0, \ldots, I_n = 0).$$

Bruss's Odds Theorem is phrased in terms of the odds and their tail sums:

$$r_j = \frac{p_j}{q_j}, \qquad R_\ell = \sum_{j=\ell}^n r_j.$$

The optimal strategy is constructed as follows:

  • Define the threshold $s = \max\left\{1,\ \max\{\ell : R_\ell \ge 1\}\right\}$.
  • The optimal stopping rule is $\tau = \min\{k \ge s : I_k = 1\}$, or $\tau = n$ if no such $k$ exists.

The maximal success probability is

$$V_n = \prod_{j=s}^n q_j \cdot R_s = \frac{R_s}{\prod_{j=s}^n (1 + r_j)}.$$

This threshold and win-probability formula, and their equivalence across multiple paper variants, are thoroughly established in (Kabaeva et al., 26 Nov 2025, Dendievel, 2012, Bruss, 2019, Ribas, 2018).
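The threshold and win probability follow from a single backward pass over the odds. A minimal sketch (the function name `odds_algorithm` is ours, not from the cited papers):

```python
from math import prod

def odds_algorithm(p):
    """Sum-of-odds rule for success probabilities p[0], ..., p[n-1] (all < 1).

    Returns the 1-indexed threshold s and the optimal win probability V_n.
    """
    n = len(p)
    r = [pj / (1 - pj) for pj in p]      # odds r_j = p_j / q_j
    R, s = 0.0, 1
    for j in range(n - 1, -1, -1):       # sum the odds backwards
        R += r[j]
        if R >= 1:                       # first crossing of 1: threshold found
            s = j + 1
            break
    # if R_1 < 1 the loop exhausts with s = 1 and R = R_1, as the theorem requires
    V = R * prod(1 - p[j] for j in range(s - 1, n))   # V_n = R_s * prod q_j
    return s, V
```

For example, five trials with $p_j = 0.2$ give $r_j = 1/4$, so the tail sum first reaches 1 at $s = 2$ and $V_5 = 1 \cdot 0.8^4 = 0.4096$.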

2. Proof Technique and Structural Properties

The optimality proof leverages dynamic programming and one-step look-ahead arguments. Central is the monotonicity of the tail odds sum $R_j$: backward induction establishes that after a unique threshold index $s$, immediate stopping on a success dominates continuation. At each time $k$, the choice is between stopping immediately on a success (winning with probability $\prod_{j=k+1}^n q_j$) and continuing (leading to the recursive value $V_{k+1}$), and the comparison shows that the region where stopping is optimal comprises a single contiguous block $[s, n]$ (Dendievel, 2012, Bruss, 2019, Ribas, 2018).
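The one-step comparison can be checked numerically. The sketch below (illustrative; it assumes the natural Bellman recursion $W_k = p_k \max(Q_k, W_{k+1}) + q_k W_{k+1}$ with $Q_k = \prod_{j>k} q_j$ and $W_{n+1} = 0$) recovers both the optimal value and the single-block structure of the stopping region:

```python
def backward_induction(p):
    """Backward-induction check of the odds rule: at each stage compare
    stopping on a success (win probability Q_k = product of later q_j)
    with the continuation value, and verify that the indices where
    stopping strictly wins form one contiguous block [s, n]."""
    n = len(p)
    q = [1 - pj for pj in p]
    W = 0.0              # continuation value with no observations left
    Q = 1.0              # product of q_j for j > current index (empty product)
    strict_stop = []     # True where stopping strictly beats continuing
    for k in range(n - 1, -1, -1):
        strict_stop.append(Q > W)
        W = p[k] * max(Q, W) + q[k] * W   # Bellman recursion
        Q *= q[k]
    strict_stop.reverse()
    s = strict_stop.index(True) + 1       # first index where stopping wins...
    assert all(strict_stop[s - 1:])       # ...and it stays optimal through n
    return s, W
```

The returned pair matches the threshold and value produced by the sum-of-odds formula.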

3. Quantitative Success Probability Bounds

Bruss's Odds Theorem yields precise and, in many cases, sharp probabilistic bounds for the probability of success under the optimal rule:

  • Upper Bound:

$$V_n \le \frac{R_s}{1 + R_s},$$

with equality if all odds mass is concentrated at $s$ and $r_j = 0$ for $j > s$.

  • Lower Bounds:

    • If $0 \le R_s < 1$, then $s = 1$, and

    $$V_n \ge R_1 \left(1 + \frac{R_1}{n}\right)^{-n},$$

    with equality when all $r_j$ equal $R_1/n$.
    • If $1 \le R_s \le 1 + 1/(n-s)$, then

    $$V_n \ge \left(1 + \frac{1}{n-s+1}\right)^{-(n-s+1)},$$

    with equality for $r_j = 1/(n-s+1)$, $j \ge s$.
    • For $R_s > 1 + 1/(n-s)$, similar constructions provide bounds, with sharpness achieved in limiting configurations.

The classical universal bound $V_n \ge 1/e$ whenever $R_1 \ge 1$ follows from a convexity argument built on the estimate

$$V_n \ge R_s e^{-R_s},$$

which holds because $\prod_{j=s}^n (1 + r_j) \le e^{R_s}$; the constant $1/e$ is unattainable for finite $n$ but is approached in the i.i.d. Bernoulli limit (Kabaeva et al., 26 Nov 2025, Ribas, 2018).
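These bounds are easy to test empirically. A hedged sketch (helper names are ours) that checks the upper bound, the convexity estimate, and the $1/e$ law on random instances:

```python
import random
from math import exp, prod

def odds_value(p):
    """Threshold s, tail odds sum R_s, and optimal value V_n = R_s * prod q_j."""
    n = len(p)
    R, s = 0.0, 1
    for j in range(n - 1, -1, -1):           # backward sum of the odds
        R += p[j] / (1 - p[j])
        if R >= 1:
            s = j + 1
            break
    V = R * prod(1 - p[j] for j in range(s - 1, n))
    return s, R, V

def bounds_hold(p):
    """Verify the three bounds from this section on one instance."""
    s, R, V = odds_value(p)
    ok = V <= R / (1 + R) + 1e-12            # upper bound R_s / (1 + R_s)
    ok = ok and V >= R * exp(-R) - 1e-12     # since prod(1 + r_j) <= e^{R_s}
    if R >= 1:                               # equivalent to R_1 >= 1 here
        ok = ok and V > 1 / exp(1)           # universal 1/e bound
    return ok

random.seed(0)
checks = all(bounds_hold([random.uniform(0.05, 0.6)
                          for _ in range(random.randint(2, 20))])
             for _ in range(1000))
```

All 1000 random instances satisfy every applicable bound, in line with the sharpness discussion above.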

4. Extensions and Generalizations

Bruss's theorem underpins an extensive literature of extensions and modifications:

  • Weighted Payoffs: For weights $w_k > 0$, the threshold becomes the last index $k$ for which the weighted odds sum $W(k) = \sum_{j=k}^n w_j p_j/(1-p_j)$ exceeds $w_{k-1}$. The optimal stopping rule and expected reward can still be written in terms of these sums (Ribas, 2018).
  • Stopping on the $m$-th Last Success: The optimal rule generalizes by summing $m$-fold products of the odds, yielding analogous sum-of-multiplicative-odds thresholds (Dendievel, 2012).
  • Dependent Structures: Markovian (e.g., two-state Markov chains), continuous-time (Poisson processes with unknown intensity), and adversarial variants retain versions of the sum-of-odds criterion under suitably generalized conditions (Dendievel, 2012).
  • Multiple Stopping Opportunities: With up to $m$ choices, recursive auxiliary odds sums produce multiple thresholds for sequential actions (Dendievel, 2012).
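The weighted threshold can be computed by the same backward scan. The helper below is an illustrative sketch of the rule as stated (our naming and conventions: 1-indexed weights $w_1,\ldots,w_n$, comparison starting at $k = 2$, defaulting to index 1 when no crossing occurs); with unit weights it reduces to the plain sum-of-odds threshold:

```python
def weighted_threshold(p, w):
    """Last 1-indexed k >= 2 with W(k) = sum_{j=k}^n w_j p_j/(1-p_j) >= w_{k-1};
    returns 1 if no such k exists.  p and w are parallel lists (w 1-indexed
    conceptually, stored 0-indexed)."""
    n = len(p)
    W = 0.0
    for k in range(n, 1, -1):        # scan downward; first hit is the last index
        W += w[k - 1] * p[k - 1] / (1 - p[k - 1])
        if W >= w[k - 2]:            # compare against the previous weight w_{k-1}
            return k
    return 1
```

With $w_k \equiv 1$ the comparison is just $R_k \ge 1$, recovering the unweighted threshold $s$.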

5. Monotonicity, Structure, and Computational Aspects

The win probability $V_n$ and threshold $s$ exhibit predictable monotonicity with respect to changes in the underlying sequence $(p_k)$:

  • If $(p_n)$ is non-increasing, the optimal value is non-increasing and threshold increments are tightly controlled.
  • If $(p_n)$ is non-decreasing, thresholds and values increase.
  • The unimodality of the mapping $k \mapsto Q(k,n)R(k,n)$ ensures the uniqueness and robustness of the threshold (Bruss, 2019). The computational recipe is recursive and efficient: backward summing of the odds determines the threshold in $O(n)$ time.
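The $O(n)$ recipe pairs naturally with simulation: the sketch below (function names are ours) estimates the win probability of the sum-of-odds rule by Monte Carlo, which should approach $V_n$ as the number of trials grows:

```python
import random

def odds_threshold(p):
    """O(n) backward scan: the 1-indexed threshold s of the odds rule."""
    R = 0.0
    for j in range(len(p) - 1, -1, -1):
        R += p[j] / (1 - p[j])
        if R >= 1:
            return j + 1
    return 1

def simulate_win_rate(p, trials=200_000, seed=1):
    """Monte Carlo estimate of the probability that the odds rule
    stops exactly on the last success."""
    rng = random.Random(seed)
    s, wins = odds_threshold(p), 0
    for _ in range(trials):
        x = [rng.random() < pj for pj in p]
        # tau: first success at or after s (1-indexed s -> 0-indexed s-1)
        tau = next((k for k in range(s - 1, len(p)) if x[k]), None)
        if tau is not None and not any(x[tau + 1:]):
            wins += 1                # stopped on the last success
    return wins / trials

rate = simulate_win_rate([0.2] * 5)  # theory: s = 2, V_5 = 0.8**4 = 0.4096
```

With 200,000 trials the estimate lands within about $\pm 0.003$ of the exact value with high probability.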

6. Canonical Examples and Practical Implications

The theorem provides the optimal solution for the classical secretary problem, where

$$p_j = \frac{1}{j}, \qquad r_j = \frac{1}{j-1}, \qquad R_s = H_{n-1} - H_{s-2},$$

with $H_k$ denoting the $k$-th harmonic number, and the success probability asymptotically approaches $1/e$ as $n \to \infty$. Applications include:

  • Secretary and Best-Choice Problems: Derivation and justification of the celebrated $1/e$-law.
  • Clinical Trials: Selection policies that minimize the risk of unnecessary further treatment after the last observed benefit (Bruss, 2019).
  • Online Selection and Search: Robotic maintenance, investment timing, and other contexts where one-stop decisions must be made in real-time (Dendievel, 2012).
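A quick sanity check of the secretary numbers (a sketch under the convention $s \ge 2$, since $r_1$ is infinite; the helper name is ours):

```python
from math import prod

def secretary_odds(n):
    """Odds-theorem solution of the secretary problem (n >= 2):
    p_j = 1/j, r_j = 1/(j-1); returns the threshold s and V_n."""
    R, s = 0.0, 2
    for j in range(n, 1, -1):        # r_1 is infinite, so the crossing has j >= 2
        R += 1.0 / (j - 1)
        if R >= 1:                   # tail odds sum R_j first reaches 1
            s = j
            break
    V = R * prod((j - 1) / j for j in range(s, n + 1))   # R_s * prod q_j
    return s, V
```

For $n = 100$ this reproduces the classical answer: observe 37 candidates, then stop on the next relatively best one, winning with probability about $0.371$, already close to the $1/e \approx 0.368$ limit.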

7. Variants, Further Bounds, and Limitations

Recent works improve lower bounds in subcritical regimes ($R_1 < 1$), analyze variants such as repeating the process until a success is observed, and consider options to predict that no successes occur. These modifications continue to guarantee nontrivial lower bounds on win probabilities, with optimal rules expressible in the sum-of-odds paradigm (Ribas, 2018).

The theorem's power is matched by clear quantitative limits: the achievable win probability is bounded above by $\frac{R_s}{1+R_s}$ and below by regime-specific, often tight, analytic expressions. In all cases, sharpness is either achieved or approached asymptotically via explicit constructions (Kabaeva et al., 26 Nov 2025). The universality and robustness of the sum-of-odds method provide both theoretical insight and practical stopping heuristics that are optimal for a broad class of sequential decision problems.
