
Scaled-Attachment Random Recursive Trees

Updated 23 October 2025
  • Scaled-Attachment Random Recursive Trees (SARRTs) are random tree models in which each new node $i$ attaches to the node with scaled index $\lfloor i X_i \rfloor$ for an i.i.d. random variable $X_i$, generalizing uniform recursive trees.
  • The construction interprets node depths as a renewal process with increments distributed as $-\log X$, leading to precise asymptotic laws for typical depth and extremal branch lengths.
  • SARRTs extend naturally to biased, power-of-choice, and greedy models, linking recursive network growth with continuous-time branching, martingale methods, and large deviation techniques.

Scaled-Attachment Random Recursive Trees (SARRTs) are random tree models that generalize classical recursive tree dynamics by coupling attachment decisions to scaled random processes. Formally, in a SARRT on vertices labelled $0, 1, \ldots, n$, each node $i \geq 1$ connects to a parent given by $\operatorname{parent}(i) = \lfloor i X_i \rfloor$, where $\{X_i\}_{i=1}^n$ are i.i.d. random variables sampled from a distribution on $[0,1)$ (Devroye et al., 2012). This framework encompasses uniform random recursive trees (URRTs) as the special case $X \sim \operatorname{Uniform}[0,1)$, and can be extended to various "power-of-choice" and greedy structures. The model facilitates an elementary yet powerful analysis of distances and depths in recursively growing networks, revealing rich connections to renewal theory, large deviations, and probabilistic combinatorics.
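The attachment rule is simple to simulate directly. Below is a minimal sketch (function names and the uniform sampler are illustrative, not from the source) that builds the parent array of a SARRT:

```python
import numpy as np

def sarrt_parents(n, sample_x, rng):
    """Parent array of a SARRT on vertices 0..n: node i >= 1 attaches to floor(i * X_i)."""
    parent = np.zeros(n + 1, dtype=int)  # parent[0] is unused; vertex 0 is the root
    for i in range(1, n + 1):
        x = sample_x(rng)                # X_i i.i.d. on [0, 1)
        parent[i] = int(i * x)           # floor(i * X_i) < i, so the tree is recursive
    return parent

rng = np.random.default_rng(0)
parent = sarrt_parents(100_000, lambda r: r.random(), rng)  # X ~ Uniform[0,1) gives a URRT
```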

1. Formal Construction and Recurrence

In a SARRT, the growth mechanism is defined by a scaled random attachment kernel. Node $i$ attaches to $\lfloor i X_i \rfloor$, which can be interpreted as a renewal step with random step-size $-\log X_i$. The process recursively yields an ancestor chain $i \mapsto \lfloor i X_i \rfloor \mapsto \lfloor \lfloor i X_i \rfloor X_{L(i,1)} \rfloor \mapsto \cdots$, where $L(i,k)$ denotes the ancestor $k$ steps above $i$.
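Tracing this chain back to the root gives the depth of a node directly; a small helper, building on the sketch above:

```python
def depth(parent, i):
    """Number of steps in the ancestor chain i -> parent(i) -> ... -> 0 (the root)."""
    d = 0
    while i != 0:
        i = parent[i]
        d += 1
    return d

print(depth(parent, 100_000))  # for the URRT above, typically close to log(100_000) ~ 11.5
```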

This rule interpolates between uniform recursive trees (no bias: all previous nodes equally likely as parent) and heavily-biased trees (when $X$ is deterministic or strongly concentrated near $0$ or $1$), and can emulate attachment probabilities dictated by node degree, index, weight, or fitness.

2. Depth Distribution, Renewal Theory, and Central Limit Behavior

The depth $D_n$ of node $n$ (its distance to the root) behaves as a renewal process with increments distributed as $-\log X$. Define $\mu = \mathbb{E}[-\log X]$ and $\sigma^2 = \operatorname{Var}[-\log X]$ (finite when $X$ is sufficiently regular). Applying renewal theory yields $D_n \sim \mu^{-1} \log n$ in probability and, if $\sigma^2 < \infty$,

$$\frac{D_n - \mu^{-1} \log n}{\sigma \sqrt{\log n / \mu^3}} \to \mathcal{N}(0,1).$$

Together, the law of large numbers and the central limit theorem explicitly quantify the typical depth scaling and its concentration for the entire class; $\mu^{-1}$ thus calibrates the rate at which the tree deepens as it grows.
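A quick Monte Carlo sanity check of both statements for the uniform kernel, where $-\log X \sim \operatorname{Exp}(1)$ so $\mu = \sigma^2 = 1$ (a sketch; since the $X_i$ are i.i.d., the depth of node $n$ can be simulated by resampling the chain with fresh uniforms, without building the whole tree):

```python
import numpy as np

def depth_of_node(n, rng):
    """Depth of node n, following the chain n -> floor(n*X) -> ... -> 0 with fresh i.i.d. X."""
    d, i = 0, n
    while i != 0:
        i = int(i * rng.random())
        d += 1
    return d

rng = np.random.default_rng(1)
n = 10**6
depths = np.array([depth_of_node(n, rng) for _ in range(2000)])
mu, sigma = 1.0, 1.0                       # X ~ Uniform[0,1): -log X ~ Exp(1)
z = (depths - np.log(n) / mu) / (sigma * np.sqrt(np.log(n) / mu**3))
print(depths.mean(), np.log(n) / mu)       # typical depth: ~13.8 for n = 10^6
print(z.mean(), z.std())                   # approximately 0 and 1 by the CLT
```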

3. Maximum and Minimum Depth: Large Deviations and Tail Scaling

The extremal depth properties (height $H_n$ and minimum late-node depth $M_n$) are governed by large deviation principles involving the Legendre–Fenchel transform $\Lambda^*$ of the log-moment generating function $\Lambda(\lambda) = \log \mathbb{E}[X^\lambda]$:

$$\Lambda^*(z) = \sup_{\lambda} \{ \lambda z - \Lambda(\lambda) \}.$$

Define the rate function $\Psi(c) = c\, \Lambda^*(-1/c)$.

The height admits the asymptotic $H_n \sim \alpha_{\max} \log n$ with $\alpha_{\max} = \inf\{ c > 1/\mu : \Psi(c) > 1 \}$. The minimum depth among late nodes satisfies $M_n \sim \alpha_{\min} \log n$ with

$$\alpha_{\min} = \begin{cases} 0 & \text{if } \{ c \in [0,1/\mu) : \Psi(c) > 1 \} = \emptyset, \\ \sup \{ c \in [0,1/\mu) : \Psi(c) > 1 \} & \text{otherwise.} \end{cases}$$

These formulas capture the rare-event behavior for "long" and "short" branches, respectively.
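Both constants can be evaluated numerically for any kernel once $\Lambda$ is known. A minimal sketch via a numerical Legendre transform (function names and optimizer bounds are illustrative assumptions, not from the source):

```python
import numpy as np
from scipy.optimize import minimize_scalar, brentq

def make_psi(log_mgf):
    """Psi(c) = c * Lambda*(-1/c), where Lambda*(z) = sup_l { l*z - Lambda(l) }."""
    def legendre(z):
        # maximize l*z - Lambda(l), i.e. minimize Lambda(l) - l*z over l
        res = minimize_scalar(lambda l: log_mgf(l) - l * z,
                              bounds=(-0.999, 50.0), method="bounded")
        return -res.fun
    return lambda c: c * legendre(-1.0 / c)

# Uniform[0,1) kernel: E[X^l] = 1/(1+l), so Lambda(l) = -log(1+l)
psi = make_psi(lambda l: -np.log1p(l))
alpha_max = brentq(lambda c: psi(c) - 1.0, 1.0 + 1e-9, 10.0)
print(alpha_max, np.e)  # alpha_max = inf{c > 1/mu : Psi(c) > 1} = e for the URRT
```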

For the URRT case ($X \sim \operatorname{Uniform}[0,1)$), $\mu = 1$, and computation yields $\alpha_{\max} = e$, establishing $H_n \sim e \log n$ independently of branching random walk theory.
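The computation behind this constant is short (a worked sketch): for $X \sim \operatorname{Uniform}[0,1)$ one has $\mathbb{E}[X^\lambda] = 1/(1+\lambda)$ for $\lambda > -1$, so $\Lambda(\lambda) = -\log(1+\lambda)$ and

$$\Lambda^*(z) = \sup_{\lambda > -1} \{ \lambda z + \log(1+\lambda) \} = -1 - z - \log(-z), \qquad z < 0,$$

whence $\Psi(c) = c\, \Lambda^*(-1/c) = c(\log c - 1) + 1$. The condition $\Psi(c) > 1$ reduces to $\log c > 1$, so $\alpha_{\max} = \inf\{ c > 1 : \Psi(c) > 1 \} = e$.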

4. Generalizations to Power-of-Choice and Greedy Models

SARRT analysis extends to biased attachment kernels, e.g., choosing $X = \max\{ U_1, \ldots, U_k \}$ for i.i.d. uniform $U_i$. Then $\mu = \mathbb{E}[-\log \max\{U_1,\ldots,U_k\}]$, and the scaling of $D_n, H_n, M_n$ proceeds via identical renewal and large deviation arguments. Typical and maximum depths of greedy DAG variants and $k$-dags are thus captured in the same formalism.
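In this case $\mu$ has a closed form: $\max\{U_1,\ldots,U_k\}^k$ is again uniform on $[0,1]$, so $-\log X \sim \operatorname{Exp}(k)$ and $\mu = 1/k$, which by the renewal result above gives typical depth $\sim k \log n$. A one-line numerical check (a sketch, not from the source):

```python
import numpy as np

rng = np.random.default_rng(2)
k = 3
x = rng.random((200_000, k)).max(axis=1)  # X = max of k i.i.d. Uniform[0,1) samples
print((-np.log(x)).mean(), 1 / k)         # empirical mu vs. 1/k, hence D_n ~ k log n
```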

5. Asymptotic Expressions and Universal Scaling Laws

Summarizing:

| Statistic | Formula | Scaling Constant |
|---|---|---|
| Typical depth | $D_n \sim (1/\mu) \log n$ | $\mu = \mathbb{E}[-\log X]$ |
| Height | $H_n \sim \alpha_{\max} \log n$ | $\alpha_{\max} = \inf \{ c > 1/\mu : \Psi(c) > 1 \}$ |
| Min late depth | $M_n \sim \alpha_{\min} \log n$ | $\alpha_{\min} = \sup \{ c \in [0,1/\mu) : \Psi(c) > 1 \}$, or $0$ if the set is empty |

All constants depend only on the law of $X$ and are computable via integrals and rate function minimizations.

6. Connections to Continuous-Time Branching Processes and Exploration Algorithms

Recent work positions SARRTs at the interface of continuous-time branching process theory. For instance, in network evolution models with limited memory (Angel et al., 21 Oct 2025), SARRTs with kernel parameter $\theta \in (0,1)$, interpreted as the scaling parameter, correspond to recursive trees in which each new vertex attaches only to late vertices. The limiting local structure is expressed as a sin-tree generated by a continuous-time branching process stopped at an exponential time.

Exploration algorithms developed to track the ancestral paths of the youngest vertices reveal the relation between global height and local fringe distributions, and describe phase transitions in the geometry of the tree (e.g., polynomial versus logarithmic height asymptotics).

7. Tree Limits and Macroscopic Geometry

The limit theory for random trees ("long dendron" convergence) applies to SARRTs whenever the typical distance between random vertices, rescaled by $1/\log n$, converges in probability to a constant $2a > 0$ (Janson, 2020). The global metric structure of large SARRTs thus reduces to a degenerate metric space where typical distances concentrate sharply.

8. Statistical Mechanics Connections: Broadcasting, Percolation, and Coalescents

SARRTs serve as a substrate for stochastic processes such as information broadcasting, percolation, and coalescence. For instance, depth-dependent broadcasting or two-colouring dynamics can be analyzed directly through SARRT scaling, with limiting distributions for monochromatic cluster sizes available via Pólya urn methods or analytic combinatorics (Desmarais et al., 2021). Coalescent processes induced by operations such as tree "lifting" can also, in principle, be studied in SARRTs, predicting genealogical partition dynamics via multiple-merger coalescents with attachment-parameter-dependent rate measures (Pitters, 2016).

9. Martingale Methods and Random Recursive Metric Spaces

Generalizations to random recursive metric spaces reveal that SARRTs are instances where each "block" is an edge with an attachment probability determined by a scaling kernel (Desmarais, 2022). The insertion depth (distance from the root to the newly-inserted vertex) admits a martingale central limit theorem with explicit scaling

$$A_n \sim \frac{\mathbb{E}[W A']}{\mathbb{E}[W]} \ln n$$

when the attachment kernel is parametrized by weight variables $W$.

10. Summary and Broader Implications

SARRTs constitute a unifying probabilistic model for recursive network growth, interpolating between uniform and preferential attachment via a scaling kernel. The renewal-theoretic analysis yields explicit asymptotic laws for the typical, maximum, and minimum depths, governed by the mean increment $\mu$ and the large deviation constants $\alpha_{\max}, \alpha_{\min}$ tied to the rate function $\Psi(c)$. These results not only provide elementary proofs for classical recursive tree statistics but also extend to diverse greedy and power-of-choice models, network exploration algorithms, and coalescent processes.

The deep connections to renewal theory, large deviation techniques, continuum tree limits, and martingale methods position SARRTs as a flexible framework for analyzing random recursive structures in combinatorics, probability, and statistical network science. The universality of the logarithmic depth law, the possibility of controlling tail behavior via kernel adjustments, and the extension to random measure trees and higher-dimensional metric spaces all point to the enduring significance of the scaled-attachment paradigm.
