
Contrastive Concept-Tree Search (CCTS)

Updated 10 February 2026
  • The paper introduces CCTS, a framework that leverages a contrastive probabilistic model to select candidate programs based on a learned semantic hierarchy.
  • CCTS constructs an interpretable concept tree to dynamically guide parent selection, balancing exploration and exploitation for improved fitness outcomes.
  • Empirical results show that CCTS outperforms fitness-only methods on tasks like circle packing by reducing variance and accelerating convergence.

Contrastive Concept-Tree Search (CCTS) is a methodological framework for improving LLM-assisted algorithm discovery by explicitly leveraging a learned hierarchy of semantic concepts. CCTS formulates discovery as an iterative search in which the LLM proposes candidate programs, an external evaluator provides fitness feedback, and, crucially, the semantic concepts underlying each candidate are extracted so that the search is organized in concept space. A contrastive probabilistic model over the extracted concept vectors guides the selection of historical solutions, reweighting parent selection toward semantically promising concept combinations and away from misleading ones. The search is thus refined through an interpretable concept hierarchy rather than relying solely on program lineage or fitness (Leleu et al., 3 Feb 2026).

1. Iterative Search Framework and Archive Structure

CCTS operates as an evolutionary optimization loop tailored for program synthesis. At iteration $t$, the algorithm maintains an archive

$$A_t = \{ (x_i, y_i, b_i) \}$$

where $x_i$ is a previously generated candidate program, $y_i = E(x_i; C)$ is its task-specific fitness as determined by an external evaluator $E$, and $b_i = \Phi(x_i)$ is the program's binary semantic concept feature vector obtained via an LLM extraction function $\Phi$.

The iterative process comprises the following core steps:

  • Parent Selection: Parents $S_t \subseteq A_t$ are chosen from the archive according to a sampling policy $\pi_t$, with CCTS using learned contrastive weights over concept representations.
  • Prompt Construction: The LLM prompt $P_t = P_0(C) \oplus \mathrm{Ctx}_t$ composes the fixed task description $P_0$ and a concise LLM-generated context summarizing recent candidates.
  • Child Generation: The LLM $\mathcal{L}$ generates a new candidate $x_{t+1} \sim q_{\mathcal{L}}(\cdot \mid P_t, S_t)$ conditional on the current prompt and selected parents.
  • Evaluation and Archive Update: The external evaluator computes $y_{t+1}$, $\Phi$ extracts the activated concept vector $b_{t+1}$, and these are appended to $A_{t+1}$.

This framework enables iterative refinement of candidate solutions, supporting dynamic incorporation of new concepts as the search progresses.
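The four steps above can be sketched as a minimal loop over an archive of (program, fitness, concepts) triples. All function names here are illustrative stand-ins for the paper's components, not an implementation from the paper:

```python
import random

def ccts_loop(propose, evaluate, extract, select_parents, n_iters=5):
    """Sketch of the CCTS outer loop: maintain an archive A_t of
    (x_i, y_i, b_i) triples and refine it iteratively."""
    archive = []  # A_t = [(x_i, y_i, b_i), ...]
    for t in range(n_iters):
        parents = select_parents(archive)   # S_t sampled via policy pi_t
        x_new = propose(parents)            # x_{t+1} ~ q_L(. | P_t, S_t)
        y_new = evaluate(x_new)             # y_{t+1} = E(x_{t+1}; C)
        b_new = extract(x_new)              # b_{t+1} = Phi(x_{t+1})
        archive.append((x_new, y_new, b_new))
    return archive

# Toy stand-ins for the LLM, evaluator, and concept extractor:
archive = ccts_loop(
    propose=lambda parents: len(parents),   # "program" = an int, for the toy run
    evaluate=lambda x: float(x),
    extract=lambda x: (x % 2, 1),
    select_parents=lambda a: random.sample(a, min(2, len(a))),
)
print(len(archive))  # 5
```

In a real deployment, `propose` wraps the LLM call with the constructed prompt $P_t$, and `extract` wraps the concept-extraction prompt described in Section 2.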

2. Extraction and Organization of Hierarchical Concept Representations

Semantic abstraction in CCTS centers on the dynamic construction of a concept tree $V$, with nodes representing program concepts (e.g., "circle center spacing", "boundary reflection"). Each concept $v \in V$ possesses a unique parent $\mathrm{pa}(v)$, forming a rooted hierarchy that encodes semantic specialization.

Concept Extraction: Following each program generation, the LLM is directly prompted: “List the semantic concepts or heuristics used by this program.” Extracted items are represented as a binary vector $b \in \{0,1\}^{|V|}$ indicating active concepts, subject to the ancestor-closure constraint ($b_v = 1 \implies b_{\mathrm{pa}(v)} = 1$).

Tree Growth: When $\Phi$ discovers a previously unseen concept, a new leaf node is appended to $V$ and attached to its most suitable parent, as determined by the LLM’s explanation. Empirically, $V$ evolves from a small set of general concepts to a broader hierarchy of specialized nodes, facilitating fine-grained semantic tracking of candidate programs.
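A minimal sketch of the growing concept tree and the ancestor-closed binary encoding might look as follows (the class and its encoding scheme are illustrative assumptions, not the paper's code):

```python
class ConceptTree:
    """Sketch of the dynamic concept tree V with ancestor-closed vectors."""

    def __init__(self):
        self.parent = {"root": None}   # pa(v) for each node v
        self.index = {"root": 0}       # position of v in the binary vector b

    def add_concept(self, name, parent="root"):
        """Append a new leaf under its most suitable parent node."""
        if name not in self.parent:
            self.parent[name] = parent
            self.index[name] = len(self.index)

    def to_vector(self, active):
        """Binary vector b obeying the ancestor-closure constraint:
        b_v = 1 implies b_pa(v) = 1."""
        b = [0] * len(self.index)
        for v in active:
            while v is not None:       # activate v and all of its ancestors
                b[self.index[v]] = 1
                v = self.parent[v]
        return b

tree = ConceptTree()
tree.add_concept("geometry")
tree.add_concept("circle center spacing", parent="geometry")
print(tree.to_vector({"circle center spacing"}))  # [1, 1, 1]
```

Activating a specialized leaf automatically activates its ancestors, which is exactly the closure property the factorized model in Section 3 relies on.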

3. Contrastive Probabilistic Modeling of Concepts

CCTS partitions the archive into "good" and "bad" subsets via a dynamic fitness threshold $\tau_t$, frequently set such that half the archive is considered high-performing. Two class-conditional probabilistic models are fit over the observed concept vectors:

$$\hat{p}_{\eta^+}(b) \approx p(b \mid \mathrm{good}_t), \qquad \hat{p}_{\eta^-}(b) \approx p(b \mid \mathrm{bad}_t)$$

Maximum-likelihood estimation (cross-entropy) determines Bernoulli parameters for each node $v$:

  • $\eta_v^+ = \mathbb{E}[\, b_{i,v} \mid i \in \mathrm{good}_t,\ b_{i,\mathrm{pa}(v)} = 1 \,]$
  • $\eta_v^- = \mathbb{E}[\, b_{i,v} \mid i \in \mathrm{bad}_t,\ b_{i,\mathrm{pa}(v)} = 1 \,]$

The joint over any valid binary vector factors as

$$\hat{p}_\eta(b) = \prod_{v \neq \mathrm{root}} \eta_v^{\, b_v} \, (1 - \eta_v)^{\, b_{\mathrm{pa}(v)} - b_v}$$

A contrastive likelihood-ratio weight is then defined for each observed $b$:

$$w(b) = \frac{\hat{p}_{\eta^+}(b)}{\hat{p}_{\eta^-}(b)}$$

This ratio quantifies, in a statistically grounded manner, which concept combinations are disproportionately frequent among high-performing solutions, enabling targeted biasing of parent selection.
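The estimation and weighting steps above can be sketched directly from the formulas. The list-based tree encoding and the smoothing constant `eps` (added so the ratio stays finite) are assumptions for illustration, not details from the paper:

```python
import math

def fit_bernoulli(vectors, parent_of, eps=1e-3):
    """MLE of eta_v = E[b_v | b_pa(v) = 1] for each non-root node,
    with light clamping so log-probabilities stay finite."""
    eta = [0.5] * len(parent_of)
    for v, pa in enumerate(parent_of):
        if pa is None:                      # root is always active
            continue
        eligible = [b for b in vectors if b[pa] == 1]
        if eligible:
            mean = sum(b[v] for b in eligible) / len(eligible)
            eta[v] = min(1 - eps, max(eps, mean))
    return eta

def log_prob(b, eta, parent_of):
    """log p_eta(b) under the tree-factorized Bernoulli model."""
    lp = 0.0
    for v, pa in enumerate(parent_of):
        if pa is None or b[pa] == 0:        # factor applies only if parent active
            continue
        lp += math.log(eta[v]) if b[v] else math.log(1 - eta[v])
    return lp

def contrastive_weight(b, eta_pos, eta_neg, parent_of):
    """w(b) = p_{eta+}(b) / p_{eta-}(b)."""
    return math.exp(log_prob(b, eta_pos, parent_of) - log_prob(b, eta_neg, parent_of))

# Tiny example: node 1's parent is the root (node 0); node 2's parent is node 1.
parent_of = [None, 0, 1]
good = [[1, 1, 1], [1, 1, 1], [1, 1, 0]]
bad = [[1, 0, 0], [1, 1, 0], [1, 0, 0]]
eta_pos = fit_bernoulli(good, parent_of)
eta_neg = fit_bernoulli(bad, parent_of)
w = contrastive_weight([1, 1, 1], eta_pos, eta_neg, parent_of)
print(w > 1)  # True: this concept combination is over-represented among "good"
```

Note that a factor for node $v$ contributes only when its parent is active, matching the conditional definition of $\eta_v$ and the exponent $b_{\mathrm{pa}(v)} - b_v$ in the joint.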

4. Guided Search and Semantic Edits via the Concept Tree

As new candidates are generated and annotated, CCTS grows $V$ and updates the contrastive model parameters ($\eta^+$, $\eta^-$) online. Parent sampling proceeds in two modes:

  • Exploit (probability $p_{\mathrm{exploit}} \approx 0.85$): Archive entries are sampled with probability proportional to $w(b_i)$.
  • Explore: Uniform sampling regardless of $w(b_i)$ encourages diversity.

Additionally, at each iteration, a "semantic edit" can be performed by injecting a directive into the LLM prompt to emphasize or avoid a particular conceptual motif. The concept selected for editing is drawn from nodes with the highest contrast (largest $\log \eta_v^+ - \log \eta_v^-$) and a novelty bonus for underexplored leaves, which increases the likelihood of discovering semantically novel algorithmic heuristics.
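The two sampling modes and the semantic-edit selection can be sketched as below; the exact form of the novelty bonus (`bonus / (1 + visits)`) is an assumption, since the paper only states that underexplored leaves receive a bonus:

```python
import math
import random

def sample_parent(archive, weights, p_exploit=0.85, rng=random):
    """Two-mode parent sampling: with probability p_exploit sample
    proportional to w(b_i), otherwise uniformly."""
    if rng.random() < p_exploit:
        return rng.choices(archive, weights=weights, k=1)[0]
    return rng.choice(archive)

def pick_edit_concept(eta_pos, eta_neg, visit_counts, bonus=0.5):
    """Pick the concept maximizing the contrast log(eta+) - log(eta-)
    plus an assumed novelty bonus for rarely-visited nodes."""
    def score(v):
        contrast = math.log(eta_pos[v]) - math.log(eta_neg[v])
        return contrast + bonus / (1 + visit_counts[v])
    return max(range(len(eta_pos)), key=score)

# Node 0 is strongly over-represented among good solutions, node 1 is not:
print(pick_edit_concept([0.9, 0.2], [0.3, 0.4], visit_counts=[10, 10]))  # 0
```

The selected concept index would then be turned into a prompt directive such as "emphasize this heuristic" or "avoid this heuristic".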

5. Empirical Evaluation Against Fitness-Only Baselines

CCTS is benchmarked against several parent selection baselines:

  • Uniform: Random parent selection.
  • Greedy: Top-1 by fitness.
  • k-Elites: Uniform selection from the top-$k$ ($k = 5$) candidates by fitness.
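For concreteness, the strongest fitness-only baseline (k-Elites) reduces to a few lines; this is a generic sketch of the standard scheme, assuming the archive holds (program, fitness, concepts) triples as in Section 1:

```python
import random

def k_elites(archive, k=5, rng=random):
    """k-Elites baseline: uniform choice among the top-k entries by fitness."""
    top = sorted(archive, key=lambda entry: entry[1], reverse=True)[:k]
    return rng.choice(top)

archive = [(f"prog{i}", float(i), None) for i in range(10)]
print(k_elites(archive, k=3)[1] >= 7.0)  # True: only the 3 fittest are eligible
```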

Empirical results on open Erdős-type combinatorics problems (circle packing, arithmetic Kakeya, Heilbronn’s triangle, squares-in-square) demonstrate that CCTS:

  • Achieves higher mean and median fitness faster (e.g., for circle packing, attains the known optimum in ∼20 iterations, compared to more than 30 for k-Elites).
  • Exhibits narrower performance variance, reducing the frequency and severity of failure modes.
  • Improves through two independent mechanisms: ablation studies confirm that contrastive weighting and semantic exploration each contribute to these search gains on their own.

Fitness-only selection is shown to amplify spurious correlations—concepts favored by the LLM but not causally linked to true task progress—whereas CCTS differentially suppresses misleading concepts through contrastive estimation, resulting in more robust exploration of solution space (Leleu et al., 3 Feb 2026).

6. Experiments in Controlled Synthetic Environments

To further probe CCTS, experiments were conducted in a synthetic setting where the LLM is replaced with a generative "teacher" branching-process concept tree populated according to explicit generative parameters (branching rate $\lambda_0$, maximum depth $D_{\max}$), and fitness is defined as a weighted sum of concepts with additive noise.

Under this model, candidate mutation is prescribed: active concepts are inherited with probability $p_{\mathrm{keep}}$, and children acquire new concepts stochastically, either locally or globally. The feature extractor $\Phi$ is subject to controlled noise (depth-dependent false negatives).
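A sketch of the synthetic mutation and fitness rules might look as follows; parameter values are illustrative, and only the global concept-acquisition route is shown (the paper also describes a local one):

```python
import random

def mutate(parent_concepts, all_concepts, p_keep=0.8, p_new=0.2, rng=random):
    """Synthetic-environment mutation: inherit each active concept with
    probability p_keep; occasionally acquire a new concept from the pool."""
    child = {c for c in parent_concepts if rng.random() < p_keep}
    if rng.random() < p_new:
        child.add(rng.choice(sorted(all_concepts)))
    return child

def fitness(concepts, true_weights, noise_scale=0.1, rng=random):
    """Fitness = weighted sum of active concepts plus additive Gaussian noise."""
    return sum(true_weights.get(c, 0.0) for c in concepts) \
        + rng.gauss(0.0, noise_scale)

pool = {"a", "b", "c"}
child = mutate({"a", "b"}, pool)
print(child <= pool)  # True: children only carry concepts from the pool
```

Because the teacher's true concept weights are known, the learned contrast $\log \eta_v^+ - \log \eta_v^-$ can be compared directly against the ground-truth discriminative log-odds, which is how the $r \approx 0.85$ correlation below is measured.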

CCTS, applied in this setting, demonstrates:

  • Nearly identical qualitative learning curves to LLM-based runs.
  • Strong correlation ($r \approx 0.85$) between the learned contrastive signal ($\log \eta_v^+ - \log \eta_v^-$) and the true discriminative log-odds assigned by the teacher.
  • The optimal exploitation probability $p_{\mathrm{exploit}}$ shifts as the branching factor of the concept tree is varied, suggesting that task-dependent adjustment of the exploration-exploitation balance is beneficial.

7. Strengths, Limitations, and Prospective Extensions

Advantages:

  • Leverages the LLM’s semantic prior while mitigating spurious or misleading biases via contrastive statistical modeling.
  • Converts black-box fitness optimization into a process conducted over an explicit, interpretable, and evolving hierarchy of semantic concepts.
  • Delivers improved mean fitness, tighter performance distributions, and interpretable insights into task-specific conceptual strategies.

Limitations:

  • The model is factorized and thus ignores higher-order interactions among concepts and their contextual dependencies.
  • Algorithmic hyperparameters (e.g., $p_{\mathrm{exploit}}$, novelty bonus rate $\lambda$, prompt engineering) require careful tuning for each task domain.
  • Reliance on the LLM’s conceptual extraction ability can limit overall performance if prompts are suboptimal or concept extraction fails.

Potential Extensions:

  • Augmenting the probabilistic model to capture richer dependencies (e.g., graphical models or neural approaches for concept co-occurrence).
  • Integrating CCTS with multi-parent crossover mechanisms or MAP-Elites style archives to further diversify exploration.
  • End-to-end learning of hierarchical concept embeddings to facilitate smoother navigation and interpolation in concept space.
  • Dynamic adjustment of the exploration/exploitation schedule conditioned on properties of the evolving concept hierarchy (e.g., observed branching rate, maximum depth).

These properties position CCTS as a distinctive approach for LLM-assisted algorithm discovery, emphasizing semantic transparency, principled guidance, and interpretable solution structure (Leleu et al., 3 Feb 2026).
