
Sublinear Graph Coloring

Updated 19 January 2026
  • Sublinear graph coloring is an algorithmic framework that colors massive graphs using sublinear time, space, or queries, often with a slight increase in the palette size.
  • It leverages techniques such as palette sparsification, randomized sampling, and probabilistic partitioning to efficiently reduce resource usage in streaming, MPC, and distributed models.
  • Advances in this area offer adaptive schemes for dynamic and adversarial settings, achieving significant memory and time savings while maintaining near-optimal color assignments.

Sublinear graph coloring refers to algorithms and structural results in which graph coloring (assigning colors to vertices so that adjacent vertices receive different colors) can be performed with time, space, or communication complexity strictly sublinear in some key parameter: most often the number of edges $m$, the number of vertices $n$, or the natural resource bounds of classical offline coloring routines. In particular, sublinear coloring targets regimes where the entire edge set cannot be stored or even touched (as with massive or streaming graphs), yet it is still possible, sometimes with a mild increase in the number of colors, to produce a proper (or near-proper) coloring by leveraging probabilistic sparsification, bucket decompositions, local or iterative refinement, or information-theoretic reductions.

1. Theoretical Foundations and Definition

Sublinear graph coloring is predicated on the observation that while classical greedy or sequential coloring algorithms require at least linear (in $m$ or $n$) time and space, for example to achieve the tight $(\Delta+1)$-vertex-coloring bound, certain randomized or distributed protocols allow coloring (to within arbitrarily small slack in the palette size) using $\tilde O(n)$ space or even $o(n^2)$ time, provided one accepts slightly more colors or works in specialized computational models (Bera et al., 2018, Assadi et al., 2018, Assadi et al., 24 Feb 2025, Alon et al., 2020).
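For contrast, the linear-resource baseline mentioned above can be sketched in a few lines: a minimal sequential greedy pass over an adjacency map (the dict-based representation is illustrative).

```python
# Classical greedy (Delta+1)-coloring: the linear baseline that sublinear
# algorithms aim to beat. It scans every edge, so it needs Omega(m) time
# and access to the full adjacency structure.
def greedy_coloring(adj):
    """adj: dict mapping each vertex to an iterable of its neighbors."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        # At most deg(v) <= Delta colors are blocked, so some color
        # in {0, ..., Delta} is always free.
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```

Because each vertex sees at most $\Delta$ colored neighbors, the palette $\{0,\dots,\Delta\}$ always contains a free color, which is exactly the tightness of the $(\Delta+1)$ bound.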

Key to this paradigm are palette sparsification theorems: for a $(\Delta+1)$-coloring task, it is often possible to sample for each vertex a random list of $O(\log n)$ candidate colors and then, by solving a much sparser conflict (list-coloring) problem, obtain a valid coloring with high probability (Assadi et al., 2018, Assadi et al., 24 Feb 2025, Alon et al., 2020). The minimum list sizes required exhibit sharp phase transitions depending on whether one insists on $\Delta+1$ colors (requiring $O(\log n)$ samples) or relaxes to $(1+\epsilon)\Delta$ colors (for which $O_\epsilon(\sqrt{\log n})$ samples suffice).

Well-developed models for sublinear coloring span streaming and semi-streaming, sublinear-time query, massively parallel computation (MPC), and distributed settings, surveyed in Section 3 below.

2. Central Algorithms and Palette Sparsification

A pivotal result is the Palette Sparsification Theorem (PST), extended in multiple directions:

  • If $G=(V,E)$ has maximum degree $\Delta$, and each $v\in V$ independently samples a list $L(v)$ of $O(\log n)$ random colors from $\{1,\dots,\Delta+1\}$, then with high probability there exists a proper coloring $\chi$ with $\chi(v)\in L(v)$ for all $v$ (Assadi et al., 2018, Assadi et al., 24 Feb 2025, Alon et al., 2020).
  • For $(1+\epsilon)\Delta$-coloring, only $O_{\epsilon}(\sqrt{\log n})$ samples per vertex suffice, and this threshold is tight (Alon et al., 2020).

Methodologically, these palettes allow a reduction from coloring $G$ (with potentially $\Theta(n^2)$ edges) to coloring a much sparser "conflict graph" $G_{\mathrm{conf}}$, which retains an edge $(u,v)$ only if $L(u)\cap L(v)\neq\emptyset$. The coloring is then constructed via list-coloring on $G_{\mathrm{conf}}$, solvable efficiently by greedy or constructive probabilistic algorithms (Assadi et al., 2018, Assadi et al., 24 Feb 2025).
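A minimal sketch of this reduction follows; the sampling constant is illustrative, and a plain greedy list-coloring stands in for the papers' constructive procedures (greedy can get stuck on hard instances, whereas the theorem only guarantees that a valid list-coloring exists w.h.p.).

```python
import math
import random

def palette_sparsified_coloring(adj, Delta, seed=0):
    """Sketch of the PST pipeline: sample O(log n) colors per vertex,
    keep only 'conflict' edges whose endpoints share a candidate color,
    then greedily list-color the sparse conflict graph."""
    rng = random.Random(seed)
    n = len(adj)
    k = 4 * math.ceil(math.log(max(n, 2)))  # the constant 4 is illustrative
    # Each vertex samples a list L(v) from the palette {1, ..., Delta+1}.
    L = {v: set(rng.sample(range(1, Delta + 2), min(k, Delta + 1))) for v in adj}
    # Conflict graph: edge (u, v) survives only if L(u) and L(v) intersect.
    conf = {v: [u for u in adj[v] if L[v] & L[u]] for v in adj}
    color = {}
    for v in sorted(adj, key=lambda v: -len(conf[v])):
        free = L[v] - {color[u] for u in conf[v] if u in color}
        if not free:
            return None  # greedy got stuck; in practice, retry with fresh lists
        color[v] = min(free)
    return color
```

Note that discarded edges are safe by construction: if $(u,v)$ is dropped, then $L(u)\cap L(v)=\emptyset$, so $u$ and $v$ can never receive the same color.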

The Asymmetric Palette Sparsification Theorem (APST) further refines sparsification by assigning list sizes adaptively based on a random permutation, allowing average list size $O(\log^2 n)$ while retaining a purely greedy coloring procedure (Assadi et al., 24 Feb 2025). This simplifies the search for a valid coloring and reduces resource usage across the streaming, sublinear-time, and MPC models (Table 1).

Table 1:

Model           | Space/Queries              | Palette size | Algorithmic core
Streaming       | $O(n\log^2 n)$             | $\Delta+1$   | Palette sparsification
Sublinear time  | $O(n^{3/2}\sqrt{\log n})$  | $\Delta+1$   | Randomized / Grover-based
MPC             | $O(n\log n)$ per machine   | $\Delta+1$   | $O(1)$-round coloring

3. Sublinear Coloring in Algorithmic Models

Streaming and Semi-Streaming

In the streaming/semi-streaming model, sublinear coloring is realized via:

  • One-pass $(1+\varepsilon)\Delta$-vertex coloring in $O(\varepsilon^{-1} n\log n)$ space, via a two-stage random partitioning scheme that bounds intra-bucket degrees with high probability (Bera et al., 2018, Assadi et al., 2018).
  • For graphs of bounded arboricity $\alpha$, an $O((1/\varepsilon)\log n)$-pass semi-streaming algorithm yields $(2+\varepsilon)\alpha$-colorings using $O(\varepsilon^{-1} n\,\mathrm{polylog}\,n)$ memory (Bera et al., 2018).
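The random-partitioning idea behind the one-pass scheme can be sketched as follows; the bucket count and per-bucket palette size here are illustrative placeholders, not the tuned parameters of the cited algorithms.

```python
import math
import random

def one_pass_partition_coloring(edge_stream, vertices, Delta, eps=0.5, seed=0):
    """Sketch: hash each vertex into one of k buckets, keep only
    intra-bucket edges from the stream (roughly a 1/k fraction), then
    greedily color each bucket with its own disjoint palette. W.h.p.
    intra-bucket degrees are about Delta/k, so the total palette is
    roughly (1+eps)*Delta colors."""
    rng = random.Random(seed)
    k = max(1, int(math.sqrt(Delta)))   # illustrative bucket count
    p = int((1 + eps) * Delta / k) + 1  # per-bucket palette size
    bucket = {v: rng.randrange(k) for v in vertices}
    sub = {v: set() for v in vertices}
    for u, v in edge_stream:            # one pass; keep intra-bucket edges only
        if bucket[u] == bucket[v]:
            sub[u].add(v)
            sub[v].add(u)
    off = {}                            # color offset within the bucket palette
    for v in vertices:
        used = {off[u] for u in sub[v] if u in off}
        free = [c for c in range(p) if c not in used]
        if not free:                    # w.h.p. does not happen for suitable k, p
            return None
        off[v] = free[0]
    # Bucket b owns the disjoint palette {b*p, ..., b*p + p - 1}, so
    # cross-bucket edges (which were discarded) can never conflict.
    return {v: bucket[v] * p + off[v] for v in vertices}
```

The key point is that discarding cross-bucket edges is harmless: endpoints in different buckets draw from disjoint palettes, so only the sparse intra-bucket subgraphs need to be stored.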

Adversarially robust streaming lower bounds show that, in the presence of adaptive adversaries, any semi-streaming algorithm using $O(n\,\mathrm{polylog}\,n)$ memory must use $\Omega(\Delta^2)$ colors, even though randomized non-robust algorithms achieve $(\Delta+1)$-colorings in the same space (Chakrabarti et al., 2021).

Sublinear Time Query, MPC, and Distributed Settings

In the query model, sublinear $(\Delta+1)$-coloring is achieved in $O(n^{3/2}\sqrt{\log n})$ time; quantum speedups further reduce this to $\tilde O(n^{4/3})$ queries (Ferber et al., 9 Feb 2025, Alon et al., 2020). In the MPC setting, $O(1)$-round $(\Delta+1)$-coloring is possible with $O(n\log n)$ memory per machine (Assadi et al., 2018, Assadi et al., 24 Feb 2025).

In distributed environments, locally-iterative coloring algorithms first achieved $O(\Delta)$-round complexity; newer methods reduce this to $O(\Delta^{3/4}\log\Delta)+\log^* n$ rounds, using defective and arbdefective intermediate colorings to collapse the palette quadratically at each step (2207.14458). For deterministic $\Delta$-coloring, recent results give an $O(\log n)$-round algorithm for dense constant-degree graphs, matching the theoretical lower bound, through a sequence of reductions on almost-clique decompositions and "slack triad" construction (Jakob et al., 3 Apr 2025).
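A classical locally-iterative primitive underlying such round-efficient schemes is color reduction: any proper $k$-coloring can be compressed to $\Delta+1$ colors by retiring one color class per round. A sequential simulation of this folklore step (not any specific cited algorithm), assuming an adjacency map and an initial proper coloring:

```python
def reduce_colors(adj, color, Delta):
    """Reduce any proper k-coloring to a (Delta+1)-coloring, retiring one
    color class per round. All movers in a round share a color in a
    proper coloring, hence form an independent set and could recolor
    simultaneously in a distributed network without conflicts."""
    k = max(color.values())
    while k > Delta:
        movers = [v for v in adj if color[v] == k]
        for v in movers:
            used = {color[u] for u in adj[v]}  # at most Delta blocked colors
            color[v] = min(c for c in range(Delta + 1) if c not in used)
        k -= 1
    return color
```

Each round costs one communication step, which is why starting from a coloring with a smaller palette (e.g. via defective colorings) directly translates into fewer rounds.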

4. Degeneracy, Arboricity, and Limitations

Sublinear coloring performance often improves on graphs of bounded degeneracy $\kappa(G)$ or arboricity $\alpha(G)$. The key tool is to randomly partition $G$ into low-degeneracy induced subgraphs (low-degeneracy partitions), then color each block greedily. Formally, coloring with $\kappa(G)+o(\kappa(G))$ colors is possible with $O(n\,\mathrm{polylog}\,n)$ memory or $O(n^{3/2})$ queries, but achieving the optimal $\kappa(G)+1$ coloring in streaming or query models requires $\Omega(n^2)$ resources (Bera et al., 2019).
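For reference, the offline routine that attains the optimal $\kappa(G)+1$ bound, and that the lower bound above says truly sublinear algorithms cannot match, is greedy coloring in reverse smallest-last (degeneracy) order; a sketch:

```python
import heapq

def degeneracy_coloring(adj):
    """Offline kappa(G)+1 coloring: repeatedly remove a minimum-degree
    vertex to build the smallest-last order, then color in reverse.
    Each vertex then sees at most kappa(G) already-colored neighbors."""
    deg = {v: len(adj[v]) for v in adj}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    removed, order = set(), []
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != deg[v]:
            continue                     # stale heap entry (lazy deletion)
        removed.add(v)
        order.append(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))
    color = {}
    for v in reversed(order):            # reverse degeneracy order
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```

Computing this order, however, inherently touches every edge, which is one intuition for why matching $\kappa(G)+1$ exactly forces $\Omega(n^2)$ streaming space or queries.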

This establishes a rigorous dichotomy: truly sublinear algorithms must pay a small additive or multiplicative penalty in palette size, dictated by probabilistic concentration and information-theoretic constructions.

5. Advances in Applications and Heuristics

Palette-based iterative procedures, such as the Picasso algorithm, demonstrate sublinear-space coloring in massive practical settings. By iteratively assigning small random lists and coloring conflict graphs of size $O(n\log^3 n)$, Picasso achieves up to 68× memory savings over existing approaches while remaining within 5% of state-of-the-art color counts. Machine learning is used to tune the tradeoff between memory and color efficiency, and a GPU implementation colors dense graphs with up to a trillion edges in under 15 minutes (Ferdous et al., 2024).

In quantum computing, these memory-efficient routines have direct application to Pauli string partitioning problems.

6. Lower Bounds, Robustness, and Open Problems

Lower bounds highlight that adversarially robust algorithms, those required to succeed against adaptive adversaries, face steep trade-offs: achieving $O(\Delta)$ colors in semi-streaming space is provably impossible, and at least $\Omega(\Delta^2)$ colors are needed in this regime (Chakrabarti et al., 2021). In the dynamic graph model, sublinear bounds hold for amortized recoloring time under edge updates even against adaptive adversaries, though the update time currently stands at $\widetilde O(n^{8/9})$ rather than polylogarithmic (Benson-Tilsen, 12 Jan 2026).

Further, tight lower bounds are established for streaming and query complexity: for instance, any $(\kappa(G)+1)$-coloring in the one-pass streaming or query model requires $\Omega(n^2)$ space or queries (Bera et al., 2019).

Open questions include whether deterministic or robust sublinear coloring with palette size $(1+\epsilon)\Delta$ is feasible, whether palette sample sizes or post-processing complexity can be further reduced, whether quantum models can break classical bounds, and whether fully deterministic low-round distributed coloring for general graphs can be realized (Alon et al., 2020, Assadi et al., 24 Feb 2025, Ferber et al., 9 Feb 2025, Jakob et al., 3 Apr 2025, Benson-Tilsen, 12 Jan 2026).

7. Extensions and Structural Variants

The sublinear coloring paradigm extends to specialized graph classes (e.g., triangle-free, bounded-arboricity, degeneracy), and to edge-coloring and defective/arbdefective coloring variants (Alon et al., 2020, 2207.14458). The line of research also impacts structural combinatorics: for instance, sublinear approximations in 5-edge-coloring are shown to be equivalent to resolving the full Petersen coloring conjecture (Mattiolo et al., 2021).

A distinguishing pattern is that the success of sublinear coloring is tied closely to the efficacy of palette sparsification lemmas and probabilistic partitioning schemes, suggesting broader impact across list-coloring, local symmetry breaking, and approximate decompositions in massive graph models.


References

  • Bera et al., 2018
  • Assadi et al., 2018
  • Assadi et al., 24 Feb 2025
  • Alon et al., 2020
  • Chakrabarti et al., 2021
  • Ferber et al., 9 Feb 2025
  • Bera et al., 2019
  • Benson-Tilsen, 12 Jan 2026
  • Ferdous et al., 2024
  • arXiv:2207.14458
  • Jakob et al., 3 Apr 2025
  • Mattiolo et al., 2021
