Sharp Thresholds in Adaptive Random Graph Processes (2207.14469v2)

Published 29 Jul 2022 in math.CO, cs.DM, and math.PR

Abstract: The $\mathcal{D}$-process is a single-player game in which the player is initially presented the empty graph on $n$ vertices. In each step, a subset of edges $X$ is independently sampled according to a distribution $\mathcal{D}$. The player then selects one edge $e$ from $X$ and adds $e$ to their current graph. For a fixed monotone increasing graph property $\mathcal{P}$, the objective of the player is to force the graph to satisfy $\mathcal{P}$ in as few steps as possible. Through appropriate choices of $\mathcal{D}$, the $\mathcal{D}$-process generalizes well-studied adaptive random graph processes, such as the Achlioptas process and the semi-random graph process. We prove a sufficient condition for the existence of a sharp threshold for $\mathcal{P}$ in the $\mathcal{D}$-process. For the semi-random process, we use this condition to prove the existence of a sharp threshold when $\mathcal{P}$ corresponds to being Hamiltonian or to containing a perfect matching. These are the first results for the semi-random graph process which show the existence of a sharp threshold when $\mathcal{P}$ corresponds to containing a sparse spanning graph. Using a separate analytic argument, we show that each sharp threshold is of the form $C_{\mathcal{P}}n$ for some fixed constant $C_{\mathcal{P}}>0$. This answers two of the open problems proposed by Ben-Eliezer et al. (SODA 2020) in the affirmative. Unlike similar results which establish sharp thresholds for certain distributions and properties, we establish the existence of sharp thresholds without explicitly identifying asymptotically optimal strategies.
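To make the process concrete, here is a minimal simulation sketch of the $\mathcal{D}$-process, instantiated with the semi-random process (each step offers all edges incident to a uniformly random vertex) and a simple greedy strategy targeting a perfect matching. This is purely illustrative: the distribution, the strategy, and all function names below are our own choices, not the paper's (asymptotically optimal) strategies.

```python
# Illustrative simulation of the D-process from the abstract.
# Assumptions (not from the paper): D is the semi-random process, the player
# uses a naive greedy matching strategy, and P is "contains a perfect matching".
import random


def semi_random_offer(n):
    """Sample X ~ D: all edges incident to one uniformly random vertex u."""
    u = random.randrange(n)
    return [(u, v) for v in range(n) if v != u]


def greedy_matching_strategy(offered, matched):
    """Pick an offered edge that extends the current matching, if any."""
    for (u, v) in offered:
        if u not in matched and v not in matched:
            return (u, v)
    return offered[0]  # otherwise accept an arbitrary offered edge


def steps_until_perfect_matching(n, max_steps=10**6):
    """Run the D-process until the greedily built matching is perfect;
    return the number of steps taken, or None if max_steps is exceeded."""
    edges, matched = set(), set()
    for step in range(1, max_steps + 1):
        offered = semi_random_offer(n)                      # sample X ~ D
        u, v = greedy_matching_strategy(offered, matched)   # player picks e in X
        edges.add((min(u, v), max(u, v)))                   # add e to the graph
        if u not in matched and v not in matched:
            matched.update((u, v))
        if len(matched) == n:                               # P holds
            return step
    return None


if __name__ == "__main__":
    n = 1000  # even, so a perfect matching exists
    steps = steps_until_perfect_matching(n)
    print(f"greedy strategy reached a perfect matching after {steps} steps "
          f"(about {steps / n:.2f} * n)")
```

Note that this naive strategy needs roughly $n\log n$ steps in expectation, whereas the paper's results concern the sharp threshold $C_{\mathcal{P}}n$ attained by (unspecified) asymptotically optimal strategies.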

Citations (9)
