
Random-Turn Games: Dynamics & Analysis

Updated 19 January 2026
  • Random-turn games are multi-agent games where move order is decided by chance, fundamentally altering traditional strategic play.
  • They encompass models like classic random-turn, first-visit, and a priori assignment, each offering unique insights into game dynamics and algorithmic complexity.
  • These games connect to Markov decision processes, continuous bidding strategies, and PDE limits, highlighting applications in combinatorial game theory and phase transitions.

A random-turn game is a multi-agent setting in which, at each move, the identity of the moving player is decided by a random process, typically an independent (possibly biased) coin flip, rather than by deterministic alternation. This stochastic control of move order fundamentally alters standard combinatorial and positional game dynamics and induces a Markov process on positions, with critical consequences for strategies, thresholds, values, algorithmic complexity, and the structure of optimal play.

1. Definitions and Core Models

The canonical form of a random-turn game is as follows: let $G = (V, E)$ be a finite directed graph (the "arena"). Each position corresponds to a vertex. At each step, a coin is tossed; with probability $p$, Player I (Max, Maker, attacker, etc.) moves the token, and with probability $1-p$, Player II (Min, Breaker, defender, etc.) does so. The play thus generates a (finite or infinite) history, with the objective and payoff depending on the specific class of game: reachability, parity, mean-payoff, or combinatorial win condition (Avni et al., 2017, Avni et al., 2019, Avni et al., 2018, Devlin et al., 2024).

Alternative randomization models include:

  • Classic random-turn: At each visit to a node, a new coin flip determines control.
  • First-visit assignment: Ownership is randomized only on first visit and then fixed thereafter.
  • A priori assignment: All ownerships are randomly assigned before play and fixed throughout (Bahmani et al., 12 Jan 2026).

Random-turn play thus forms a Markov chain (in the case of finite, acyclic games) or a Markov Decision Process/Stochastic Game (in infinite-duration or cyclic arenas).

2. Combinatorial Games: Chomp and Nim under Random Play

Devlin–Trifonova's analysis of random-turn Chomp and Nim yields explicit formulas for both the expected duration and the win probabilities under uniform random-move selection. For an impartial game with position set $B$ and successors $D \lessdot B$:

  • Expected turns (Markovian recurrence):

$$\mathbb E[B]=1+\frac{1}{|B|}\sum_{D\lessdot B}\mathbb E[D]$$

For Chomp, the closed-form solution is:

$$\mathbb E[B]=\sum_{(x,y)\in B}\frac{1}{xy}$$

For an $m\times n$ board,

$$\mathbb E[\text{rectangle } m\times n]=H_m H_n,\qquad H_k=\sum_{r=1}^k \frac{1}{r}$$
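As a sanity check, the recurrence and the harmonic-number closed form can be compared directly on small boards. The following is a minimal sketch (the board representation and function names are our own, not from the paper):

```python
from fractions import Fraction

def successors(board):
    # Eating cell (x, y) removes every cell (a, b) with a >= x and b >= y.
    for (x, y) in board:
        yield frozenset((a, b) for (a, b) in board if not (a >= x and b >= y))

def expected_turns(board, memo=None):
    # E[B] = 1 + (1/|B|) * sum over successors D of E[D], with E[empty] = 0.
    memo = {} if memo is None else memo
    if not board:
        return Fraction(0)
    if board not in memo:
        memo[board] = 1 + Fraction(1, len(board)) * sum(
            expected_turns(d, memo) for d in successors(board))
    return memo[board]

def rectangle(m, n):
    return frozenset((x, y) for x in range(1, m + 1) for y in range(1, n + 1))

def harmonic(k):
    return sum(Fraction(1, r) for r in range(1, k + 1))

# Closed form: E[B] = sum of 1/(x*y) over cells (x, y) in B; for a full
# m x n rectangle this equals H_m * H_n.
print(expected_turns(rectangle(2, 3)))  # 11/4 = H_2 * H_3
```

Exact rational arithmetic via `Fraction` makes the agreement with $H_m H_n$ an equality rather than a floating-point approximation.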

  • Two-row Chomp win probabilities:

For $n\ge k$,

$$P(n,k)=\frac12-\frac{n\alpha_k+\beta_k}{(2(k-1))!\,(n+k)(n+k-1)(n+k-2)}$$

where $(\alpha_k, \beta_k)$ satisfy integer recurrences (see (Devlin et al., 2024) for initial values and recursions).

  • Random Nim:

Each pile acts as a one-row Chomp. The expected total number of turns is the sum of the per-pile harmonic numbers:

$$\mathbb E[\text{Nim with } s_1,\dots,s_k]=\sum_{i=1}^k H_{s_i}$$

The win probability is $1/2$ for nontrivial positions (i.e., any position with a pile $>1$).

Significantly, random play collapses the usual first-player advantage: for board configurations with any nontrivial choice (e.g., $k\geq 1$ in two-row Chomp), both players win with probability exactly $1/2$ (Devlin et al., 2024).
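The $1/2$ claim can be checked exactly on small Nim positions. A minimal sketch, assuming normal play (whoever removes the last counter wins) and uniformly random moves; the recursion $W(B)=\frac{1}{|B|}\sum_{D\lessdot B}(1-W(D))$ is the natural win-probability analogue of the expected-turns recurrence:

```python
from fractions import Fraction
from functools import lru_cache

def norm(piles):
    # Canonical form: drop empty piles, sort.
    return tuple(sorted(p for p in piles if p > 0))

@lru_cache(maxsize=None)
def win_prob(piles):
    # W(B): probability that the player about to move wins random-move Nim,
    # assuming normal play (whoever removes the last counter wins).
    if not piles:
        return Fraction(0)  # previous player took the last counter and won
    total, moves = Fraction(0), 0
    for i, s in enumerate(piles):
        for new in range(s):  # reduce pile i to any size in 0..s-1
            total += 1 - win_prob(norm(piles[:i] + (new,) + piles[i + 1:]))
            moves += 1
    return total / moves

print(win_prob(norm((3, 4, 5))))  # 1/2: some pile exceeds 1
print(win_prob(norm((1, 1, 1))))  # trivial positions are decided by parity
```

Positions in which every pile equals $1$ are the "trivial" exceptions: there the outcome is forced by the parity of the number of counters.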

3. Random-Turn Graph Games and Stochastic Bidding Equivalences

Several classes of graph games—reachability, parity, and mean-payoff—possess a deep equivalence between random-turn models and continuous bidding mechanisms (notably Richman and poorman bidding):

  • Richman bidding corresponds to a fair (unbiased-coin) random-turn process. For reachability, the "threshold budget" $\mathrm{Th}(v)$ at vertex $v$ satisfies:

$$\mathrm{Th}(v)=1-\mathrm{val}(RT(G),v),\qquad \mathrm{val}(RT(G),v)=\tfrac{1}{2}\left(\mathrm{val}(v^+)+\mathrm{val}(v^-)\right)$$

where $v^+, v^-$ are the successors of maximal and minimal value, respectively.

  • Poorman bidding connects to biased coins: the optimal value with initial budget ratio $r$ equals the random-turn value with coin bias $p=r$ (Avni et al., 2018, Avni et al., 2019). For mean-payoff and parity games, this connection enables the transfer of values and optimal strategies (potentials and strengths) between auction-style bidding and random-turn Markovian analysis.
  • Taxman bidding interpolates between the two, with coin bias $F(\tau, r) = \frac{r+\tau(1-r)}{1+\tau}$ (Avni et al., 2019).

Explicitly, in two-vertex mean-payoff examples, all three mechanisms map exactly to the expectation $2p-1$ (where $p$ is the effective coin probability for Player 1).
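The threshold-budget characterization above can be approximated by straightforward value iteration on the fair random-turn game. A minimal sketch on a path arena (the graph, iteration count, and function names are illustrative assumptions, not from the cited papers):

```python
def random_turn_values(succ, target, sink, iters=2000):
    # Fair random-turn reachability: val(v) = (val(v+) + val(v-)) / 2,
    # where v+ / v- are the best successors for Max / Min.
    vals = {v: 0.0 for v in succ}
    vals[target] = 1.0
    for _ in range(iters):
        for v, nbrs in succ.items():
            if v in (target, sink) or not nbrs:
                continue
            s = [vals[u] for u in nbrs]
            vals[v] = 0.5 * (max(s) + min(s))
    return vals

# Path 0 - 1 - 2 - 3 - 4 with sink 0 and target 4 (a discrete tug-of-war).
succ = {0: [], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: []}
vals = random_turn_values(succ, target=4, sink=0)
thresholds = {v: 1 - vals[v] for v in vals}  # Richman threshold budgets
print(vals)        # val(i) converges to i/4 on this path
print(thresholds)
```

On the path, the fixed point is the linear interpolation $\mathrm{val}(i)=i/4$, so the Richman threshold at vertex $i$ is $1-i/4$.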

4. Random-Turn Positional Games: Maker-Breaker Games

Random-turn variants of classical positional games involve a coin-flip before each claim, generalizing Maker–Breaker games. Thresholds for transition between typical Maker/Breaker wins correspond to key random-graph thresholds:

| Game | Random-turn threshold $p^*(n)$ |
| --- | --- |
| Box (intact set) | $ps \approx \ln n$ |
| Hamilton cycle on $K_n$ | $p^*(n) = \frac{\ln n + \ln\ln n}{n}$ |
| $k$-connectivity on $K_n$ | $p^*(n) = \frac{\ln n + (k-1)\ln\ln n}{n}$ |

Efficient randomized strategies can match the threshold up to constants, using Chernoff bounds and Box-game block decompositions. The existence of these explicit polynomial-time algorithms demonstrates that the randomization of turn-order does not significantly change the threshold regime, but drastically alters the strategy space and removes deterministic first/second player biases (Ferber et al., 2014).

5. Infinite-Duration Random-Turn Games and Complexity

For infinite-duration games on graphs (reachability, parity, energy/mean-payoff), control assignment via random turns introduces several variants with distinct computational complexity profiles (Bahmani et al., 12 Jan 2026):

  • Classic random-turn (re-flip on each visit): deciding qualitative almost-sure winning is NL-complete for all objectives.
  • First-visit assignment: the quantitative threshold problem (deciding whether the maximizer can win with probability at least $\theta$) is PSPACE-complete.
  • A priori assignment: exact computation of the winning probability is P-complete; a Monte Carlo randomized approximation scheme estimates values efficiently, with sample complexity $O(\epsilon^{-2}\log(1/\delta))$ for additive error $\epsilon$ and confidence $1-\delta$.

Memoryless strategies suffice in all cases, and critical recursions reduce to linear (for classical) and alternating polynomial-time checks (for first-visit assignments).
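A minimal sketch of the Monte Carlo scheme for the a priori model, assuming a reachability objective: ownerships are sampled up front, each sampled deterministic game is solved by an attractor computation, and the Hoeffding bound supplies the $O(\epsilon^{-2}\log(1/\delta))$ sample count. The toy arena and function names are hypothetical:

```python
import math
import random

def max_wins_from(succ, owner, target, start):
    # Attractor computation for a deterministic reachability game:
    # Max wins from v iff v lies in the attractor of the target.
    win = {target}
    changed = True
    while changed:
        changed = False
        for v, nbrs in succ.items():
            if v in win or not nbrs:
                continue
            good = (any(u in win for u in nbrs) if owner[v] == "max"
                    else all(u in win for u in nbrs))
            if good:
                win.add(v)
                changed = True
    return start in win

def estimate_win_prob(succ, target, start, p=0.5, eps=0.05, delta=0.01, seed=0):
    # Hoeffding: n >= ln(2/delta) / (2 * eps**2) samples suffice for
    # additive error eps with confidence 1 - delta.
    rng = random.Random(seed)
    n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    hits = 0
    for _ in range(n):
        owner = {v: "max" if rng.random() < p else "min" for v in succ}
        hits += max_wins_from(succ, owner, target, start)
    return hits / n

# Toy arena: from s the controller picks target t or dead end d, so Max
# wins iff s was assigned to Max; the true probability is p = 1/2.
succ = {"s": ["t", "d"], "t": [], "d": []}
print(estimate_win_prob(succ, target="t", start="s"))
```

Each sampled assignment yields an ordinary two-player reachability game, which the attractor loop solves in linear time per sample, matching the memoryless-sufficiency remark above.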

6. Thresholds and Phase Transitions in Random-Turn Minimax Games

Pearl's tree-based analysis and recent work on non-tree random-turn graphs (e.g., ${\rm AB}_n$, ${\rm Ab}_n$) demonstrate sharp threshold phenomena in the win probability as the underlying randomness parameter $p$ crosses a critical value $p_c$ (Cardona-Tobón et al., 2024):

  • Pearl case (tree): for $p>p_c$, Player II has a winning strategy a.a.s.; for $p<p_c$, Player I does.
  • Non-tree cases: sharp thresholds persist, but the critical regimes and limiting probabilities at threshold differ. Total and maximal pivotal influence (via the BKKKL theorem and Russo's formula) control the window width, which is of order $O(1/n)$ for graph depth $n$.
  • Open questions: precise threshold values, the sharpness exponent, and the behavior of $P_n(p_c)$.

These analyses illuminate stochastic phases and transitions in random-turn settings, of fundamental interest in the analysis of Boolean functions and connected to cellular automata.
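For intuition, the classical Pearl tree recursion (deterministic alternation, binary branching, Bernoulli($p$) leaves; a simplification of the cited random-turn models) already exhibits the sharp threshold numerically:

```python
def level_map(q):
    # One MAX-over-MIN double level of a binary tree: a MIN node is a win
    # iff both children are wins (q -> q**2); a MAX node iff at least one
    # child is (q -> 1 - (1 - q)**2).
    q_min = q * q
    return 1 - (1 - q_min) ** 2

def root_win_prob(p, depth):
    # Probability the root is a win after `depth` double levels over
    # Bernoulli(p) leaves.
    q = p
    for _ in range(depth):
        q = level_map(q)
    return q

# The unstable fixed point q* = (sqrt(5) - 1) / 2, about 0.618, acts as a
# sharp threshold: above it the root win probability tends to 1 as depth
# grows, below it to 0.
print(root_win_prob(0.70, 60))
print(root_win_prob(0.55, 60))
```

Because the fixed point is unstable ($|g'(q^*)|>1$), iterating the level map amplifies any deviation from $q^*$, which is exactly the mechanism behind the $O(1/n)$ threshold windows mentioned above.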

7. Random-Turn Games in Continuous and PDE Settings

Random-turn Tug-of-War games, in which a coin flip decides which player moves the token on a graph or metric space, connect to the theory of infinity harmonic functions and extremal partial differential equations (Antón et al., 2019). The dynamic programming principle (DPP) for value functions in graph games takes the form:

$$u(i) = \frac12\left(\max_{j\sim i} u(j) + \min_{j\sim i} u(j)\right)$$

Uniqueness and existence follow from discrete comparison principles. The continuum limit connects to solutions of the Jensen extremal PDE:

$$\min\{|\nabla u| - 1,\ -\Delta^N_\infty u\} = 0$$

This provides a rigorous framework linking stochastic random-turn gameplay to viscosity solutions and variational analysis.
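The DPP can be solved numerically by fixed-point iteration. A minimal sketch on a 6-cycle with Dirichlet data at two vertices (the graph, boundary values, and iteration count are illustrative assumptions):

```python
def dpp_solve(neighbors, boundary, iters=5000):
    # Iterate u(i) = (max over j~i of u(j) + min over j~i of u(j)) / 2
    # at interior vertices; boundary values stay fixed.
    u = {v: 0.0 for v in neighbors}
    u.update(boundary)
    for _ in range(iters):
        for v, nbrs in neighbors.items():
            if v in boundary:
                continue
            vals = [u[w] for w in nbrs]
            u[v] = 0.5 * (max(vals) + min(vals))
    return u

# 6-cycle 0-1-2-3-4-5-0 with boundary data u(0) = 0 and u(3) = 1.
neighbors = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
u = dpp_solve(neighbors, boundary={0: 0.0, 3: 1.0})
print(u)  # interior values interpolate linearly along each arc: 1/3, 2/3
```

The converged values are the discrete analogue of the infinity-harmonic extension: on each arc between the boundary vertices they interpolate linearly in graph distance, as the comparison principle predicts.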


References:

(Avni et al., 2017, Avni et al., 2019, Avni et al., 2018, Devlin et al., 2024, Bahmani et al., 12 Jan 2026, Cardona-Tobón et al., 2024, Ferber et al., 2014, Arieli et al., 2015, Antón et al., 2019)
