Every Call is Precious (ECP) Framework
- The Every Call is Precious (ECP) framework is a principled method for black-box optimization that defines an acceptance region based on Lipschitz continuity to ensure every evaluation is informative.
- It employs an adaptive threshold mechanism that balances exploration and exploitation without requiring explicit estimation of the Lipschitz constant.
- ECPv2 enhances the original by integrating adaptive lower bounds, worst-m memory, and random projections, reducing computational cost and improving performance in high-dimensional settings.
The Every Call is Precious (ECP) framework is a theoretically principled and practically effective family of global optimization algorithms for black-box, nonconvex, Lipschitz-continuous functions with unknown Lipschitz constants. ECP and its scalable extension, ECPv2, pursue a "precious evaluation" philosophy, ensuring that each function query is potentially informative with respect to the global optimum. This approach systematically excludes wasteful or provably suboptimal evaluations, offering no-regret and minimax-optimal finite-time guarantees without requiring explicit estimation of the Lipschitz constant (Fourati et al., 6 Feb 2025, Fourati et al., 20 Nov 2025). The ECP paradigm has demonstrated robustness and competitive performance on high-dimensional synthetic and real-world benchmarks.
1. Problem Setting and Acceptance Principle
ECP targets black-box global maximization, $\max_{x \in \mathcal{X}} f(x)$, where $f: \mathcal{X} \to \mathbb{R}$ is unknown but assumed $L$-Lipschitz for some unknown $L > 0$: $|f(x) - f(y)| \le L\,\|x - y\|$ for all $x, y \in \mathcal{X}$. At each step $t$, the optimizer maintains an archive $\{(X_i, f(X_i))\}_{i=1}^{t}$. Candidates are drawn uniformly at random, but rather than evaluating each one, ECP screens them through an "acceptance region": the set of points that could plausibly be maximizers for some Lipschitz extension of the observed data at the current surrogate Lipschitz constant $\varepsilon_t$. Formally:

$$\mathcal{A}_t = \Big\{ x \in \mathcal{X} : \min_{1 \le i \le t} \big[ f(X_i) + \varepsilon_t \|x - X_i\| \big] \ge f_{\max}^t \Big\},$$

with $f_{\max}^t = \max_{1 \le i \le t} f(X_i)$.
This acceptance rule is nonparametric and adapts as additional points are evaluated. Intuitively, $\min_{i} \big[ f(X_i) + \varepsilon_t \|x - X_i\| \big]$ yields the best upper bound on $f(x)$ consistent with an $\varepsilon_t$-Lipschitz function. By evaluating $x$ only if this best-case estimate exceeds the current archive maximum $f_{\max}^t$, ECP ensures that every accepted call is "precious" (Fourati et al., 6 Feb 2025).
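The acceptance test reduces to a few lines of NumPy. Below is a minimal sketch; the function name `is_precious` and the array layout are our own illustration, not taken from the papers:

```python
import numpy as np

def is_precious(y, X, fX, eps):
    """Accept y iff its best-case value under an eps-Lipschitz
    extension of the archive reaches the archive maximum.

    y   : candidate point, shape (d,)
    X   : archived points, shape (t, d)
    fX  : archived values, shape (t,)
    eps : current surrogate Lipschitz constant
    """
    # Best-case upper bound U(y) = min_i [ f(X_i) + eps * ||y - X_i|| ]
    upper = np.min(fX + eps * np.linalg.norm(X - y, axis=1))
    return bool(upper >= np.max(fX))
```

For example, a candidate sitting very close to a low-valued archive point is rejected (its best-case value cannot reach the archive maximum), while a candidate far from all archived points is accepted.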
2. Algorithmic Workflow and Adaptation Mechanism
ECP employs a multi-scale exploration strategy in which the threshold $\varepsilon_t$ increases adaptively. The algorithm proceeds as follows:
- Initialization: Draw the first sample $X_1 \sim \mathcal{U}(\mathcal{X})$, evaluate $f(X_1)$, and set the initial surrogate constant $\varepsilon_1$ to a small positive value.
- Sampling: At iteration $t$, repeatedly sample $Y \sim \mathcal{U}(\mathcal{X})$ and check for membership in $\mathcal{A}_t$.
- Acceptance and Archive Update: Upon acceptance, evaluate $f(Y)$, add $(Y, f(Y))$ to the archive, and multiply $\varepsilon_t$ by a constant growth factor greater than one.
- Patience Control: Maintain a "rejection counter": after $C$ consecutive rejections, multiply $\varepsilon_t$ by $\tau_{nd} > 1$ and reset the counter.
This protocol fosters a careful balance: initially, the small $\varepsilon_t$ yields tight acceptance regions and strong exploitation; as $\varepsilon_t$ increases, the acceptance region expands, promoting exploration. The approach provably avoids indefinite rejection loops and systematically relaxes acceptance as the evaluation budget progresses (Fourati et al., 6 Feb 2025).
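The workflow above can be sketched as a short loop. The hyperparameter values here (`eps1`, `growth`, `tau`, `patience`) are illustrative placeholders, not the papers' defaults:

```python
import numpy as np

def ecp_maximize(f, bounds, n, eps1=1e-3, growth=1.1, tau=2.0,
                 patience=50, rng=None):
    """Minimal ECP-style loop: sample uniformly, evaluate only
    'precious' points, relax eps after `patience` rejections."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    X = [rng.uniform(lo, hi)]          # archive of evaluated points
    fX = [f(X[0])]
    eps, rejects = eps1, 0
    while len(X) < n:
        y = rng.uniform(lo, hi)
        # Best-case value of y under an eps-Lipschitz extension.
        upper = min(fx + eps * np.linalg.norm(y - x)
                    for x, fx in zip(X, fX))
        if upper >= max(fX):           # precious: spend an evaluation
            X.append(y)
            fX.append(f(y))
            eps *= growth              # grow eps on acceptance
            rejects = 0
        else:
            rejects += 1
            if rejects >= patience:    # patience exhausted: relax
                eps *= tau
                rejects = 0
    i = int(np.argmax(fX))
    return X[i], fX[i]
```

For instance, `ecp_maximize(lambda x: -float((x[0] - 0.3) ** 2), (np.array([0.0]), np.array([1.0])), 60)` concentrates its accepted evaluations near the maximizer at 0.3.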
3. Theoretical Properties and Guarantees
ECP provides the following guarantees under $L$-Lipschitz continuity:
- Monotonic Acceptance Region: For $\varepsilon \le \varepsilon'$, $\mathcal{A}_t(\varepsilon) \subseteq \mathcal{A}_t(\varepsilon')$. The region expands as $\varepsilon_t$ increases.
- Potential Optimality: When $\varepsilon_t \ge L$, $\mathcal{A}_t$ contains every point that could be a global maximizer under some $L$-Lipschitz completion of the observed data.
- Finite-Time and Asymptotic Regret Bounds: For any $\delta \in (0,1)$, with probability at least $1-\delta$:

$$\max_{x \in \mathcal{X}} f(x) - f_{\max}^n \;\le\; O\!\left( L\,\mathrm{diam}(\mathcal{X}) \left( \frac{\ln(1/\delta)}{n} \right)^{1/d} \right),$$

with $d$ the ambient dimension, which matches the $\Omega(n^{-1/d})$ minimax lower bound for global optimization under Lipschitz continuity (Fourati et al., 6 Feb 2025).
- No-regret: As $n \to \infty$, $f_{\max}^n \to \max_{x \in \mathcal{X}} f(x)$ in probability for every $L$-Lipschitz $f$.
These properties ensure strong theoretical robustness and confirm the "preciousness" principle: every function call advances, in a minimax-optimal sense, the global search.
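To get a feel for the $n^{-1/d}$ rate, the dimension-dependent factor of the finite-time bound can be tabulated. The numbers below are generic consequences of the rate, not figures from the papers:

```python
import math

def regret_rate(n, d, delta=0.05):
    """Dimension-dependent factor (ln(1/delta)/n)^(1/d) of the
    finite-time bound; L * diam(X) multiplies this in the full bound."""
    return (math.log(1.0 / delta) / n) ** (1.0 / d)

# The same budget buys far less accuracy as d grows
# (the curse of dimensionality inherent to Lipschitz optimization):
for d in (1, 2, 10):
    print(d, regret_rate(n=10_000, d=d))
```

With a budget of 10,000 evaluations, the factor is on the order of $10^{-4}$ in one dimension but only about $0.44$ in ten dimensions, which is why the minimax rate is unavoidable rather than an artifact of the algorithm.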
4. Computational Challenges and ECPv2 Extensions
Original ECP's computational bottlenecks and conservative early-phase rejection rates are addressed in ECPv2 through three principal mechanisms (Fourati et al., 20 Nov 2025):
- Adaptive Lower Bound on $\varepsilon_t$: At each $t$, compute

$$\varepsilon_{\varnothing}^t = \frac{f_{\max}^t - f_{\min}^t}{\mathrm{diam}(\mathcal{X})}$$

and set $\varepsilon_t \leftarrow \max(\varepsilon_t, \varepsilon_{\varnothing}^t)$. This prevents vacuous acceptance regions and ensures that $\mathcal{A}_t$ is nonempty.
- Worst-$m$ Memory Mechanism: Only the worst $m$ points (those with the lowest values $f(X_i)$) are used in the acceptance test:

$$\mathcal{A}_t^m = \Big\{ x \in \mathcal{X} : \min_{i \in I_t^m} \big[ f(X_i) + \varepsilon_t \|x - X_i\| \big] \ge f_{\max}^t \Big\},$$

where $I_t^m$ indexes the $m$ smallest archive values. This reduces per-iteration cost from $O(t)$ to $O(m)$ without sacrificing theoretical guarantees.
- Fixed Random Projection: Distances are computed in a reduced dimension $d' \ll d$ via a Gaussian random matrix $P \in \mathbb{R}^{d' \times d}$. With high probability (Johnson–Lindenstrauss):

$$(1-\delta)\,\|x - y\|^2 \;\le\; \|Px - Py\|^2 \;\le\; (1+\delta)\,\|x - y\|^2$$

for all evaluated pairs, with $d'$ scaling as $O(\log n / \delta^2)$. Computational cost per query is further reduced to $O(m\,d')$.
Acceptance Region Inclusion: With high probability, $\mathcal{A}_t$ is contained in the region defined by the worst-$m$, projected test with the $1/\sqrt{1-\delta}$ distance correction, ensuring ECPv2 never rejects a point that ECP would accept.
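The three mechanisms combine into a single cheaper acceptance test. A sketch under the stated JL assumptions follows; the function name and argument layout are ours, and `diam` denotes the diameter of the search domain:

```python
import numpy as np

def ecpv2_accept(y, X, fX, eps, P, m, delta, diam):
    """ECPv2-style test: worst-m archive points, distances in the
    projected space, corrected by 1/sqrt(1-delta) so that projected
    distances never under-estimate true ones that survived projection."""
    X, fX = np.asarray(X, float), np.asarray(fX, float)
    # Worst-m memory: indices of the m smallest observed values.
    k = min(m, len(fX))
    idx = np.argpartition(fX, k - 1)[:k]
    # Adaptive lower bound on eps prevents a vacuous acceptance region.
    eps = max(eps, (fX.max() - fX.min()) / diam)
    # Distances computed in the d'-dimensional projected space.
    dists = np.linalg.norm(X[idx] @ P.T - P @ y, axis=1)
    upper = np.min(fX[idx] + eps / np.sqrt(1.0 - delta) * dists)
    return bool(upper >= fX.max())
```

Setting `P` to the identity, `delta = 0`, and `m` at least the archive size recovers the original ECP test, which makes the inclusion property easy to check empirically.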
Theoretical analysis confirms that ECPv2 preserves no-regret guarantees and optimal finite-time regret rates. Each innovation is validated through ablation and empirical studies (Fourati et al., 20 Nov 2025).
5. Practical Implementation and Complexity
The core computational steps for both ECP and ECPv2 are as follows:
- Per Candidate: Calculate acceptance-region membership by minimizing a surrogate upper bound over a subset (all, or the worst $m$) of previous points.
- Per Iteration (ECPv2):
1. Project the candidate and archive to $\mathbb{R}^{d'}$.
2. For each candidate $Y$, compute $\min_{i \in I_t^m} \big[ f(X_i) + (\varepsilon_t/\sqrt{1-\delta})\,\|PY - PX_i\| \big]$.
3. Accept or reject, updating the archive, $\varepsilon_t$, and the rejection counter accordingly.
Memory cost is $O(n\,d)$ for the archive plus $O(n\,d')$ for its projections; distance computations scale as $O(m\,d')$ per candidate, with $d' \ll d$.
ECPv2 pseudocode is as follows:
```
Construct random projection P of size d' x d
Sample X1 ~ U(X); observe f(X1)
t = 1; eps = eps1; h_reject = 0
while t < n:
    Propose Y ~ U(X); hatY = P Y
    if h_reject >= C:
        eps = tau_nd * eps
        h_reject = 0
    eps_oslash = (f_max^t - f_min^t) / diam(X)
    eps = max(eps, eps_oslash)
    Identify worst-m index set I_t^m
    if min_{i in I_t^m} [f(X_i) + eps/sqrt(1-delta) * ||hatY - P X_i||] >= f_max^t:
        Evaluate f(Y); append to archive; t += 1; h_reject = 0
    else:
        h_reject += 1
Return argmax_{i <= n} f(X_i)
```
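The pseudocode above translates into a compact, runnable sketch. The hyperparameter values below (`eps1`, `tau_nd`, `C`, `m`, `delta`, `d_proj`) are illustrative placeholders, not the papers' defaults:

```python
import numpy as np

def ecpv2(f, lo, hi, n, d_proj=None, m=32, eps1=1e-3, tau_nd=2.0,
          C=50, delta=0.1, seed=0):
    """Runnable sketch of the ECPv2 loop for maximizing f on the box
    [lo, hi], using n evaluations."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    d = lo.size
    d_proj = d_proj or d
    # Fixed Gaussian random projection with JL-style scaling.
    P = rng.normal(size=(d_proj, d)) / np.sqrt(d_proj)
    diam = np.linalg.norm(hi - lo)
    X = [rng.uniform(lo, hi)]
    fX = [f(X[0])]
    Z = [P @ X[0]]                          # projected archive
    eps, h_reject = eps1, 0
    while len(X) < n:
        y = rng.uniform(lo, hi)
        if h_reject >= C:                   # patience exhausted: relax
            eps *= tau_nd
            h_reject = 0
        fa = np.asarray(fX)
        fmax, fmin = fa.max(), fa.min()
        eps = max(eps, (fmax - fmin) / diam)   # adaptive lower bound
        k = min(m, len(fX))
        idx = np.argpartition(fa, k - 1)[:k]   # worst-m memory
        dists = np.linalg.norm(np.asarray(Z)[idx] - P @ y, axis=1)
        if np.min(fa[idx] + eps / np.sqrt(1.0 - delta) * dists) >= fmax:
            X.append(y)
            fX.append(f(y))
            Z.append(P @ y)
            h_reject = 0
        else:
            h_reject += 1
    i = int(np.argmax(fX))
    return X[i], fX[i]
```

As a usage example, `ecpv2(lambda x: -float(np.sum((x - 0.5) ** 2)), np.zeros(5), np.ones(5), 200, d_proj=3)` maximizes a 5-D concave toy objective while testing acceptance in a 3-D projected space.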
6. Experimental Benchmarks and Performance
Comprehensive benchmark studies compare ECP/ECPv2 to established methods including AdaLIPO, DIRECT, SMAC3, Dual-Annealing, CMA-ES, and Bayesian techniques. Key settings include:
- Benchmarks: High-dimensional synthetic functions (e.g., Rosenbrock, Powell) and low-dimensional testbeds.
- Metrics: Simple regret versus evaluation budget, wall-clock time.
- Hyperparameters: held at their default values across all benchmarks.
Results confirm that ECPv2 matches or outperforms all competing methods in final regret, with particularly notable acceleration on high-dimensional tasks, where it is substantially faster than ECP in wall-clock time at equal or better regret. Ablation experiments demonstrate the independent and combined impacts of the lower-bounding, worst-$m$, and projection mechanisms in reducing computational burden (Fourati et al., 20 Nov 2025).
A summary of empirical findings is provided below:
| Method | Benchmark Coverage | Regret Performance | Wall-clock Speed |
|---|---|---|---|
| ECPv2 | Broad (2–1000D) | Optimal/near-optimal | Fastest; well ahead of ECP |
| ECP | Broad (2–1000D) | Optimal/near-optimal | Slower than ECPv2 |
| SOTA others | Broad (varied) | Sometimes close | Variable |
On Rosenbrock, ECPv2 achieves optimal regret in roughly half the wall-clock time required by ECP, with other optimizers typically slower and/or less robust for unknown Lipschitz constants (Fourati et al., 20 Nov 2025).
7. Limitations, Extensions, and Applications
ECP's foundational assumption is global Lipschitz continuity of the objective. If $f$ is highly non-Lipschitz or exhibits severe local ruggedness, the core acceptance-rejection logic may become less effective. In extremely high dimensions or for very large evaluation budgets, surrogate-based Bayesian optimization or evolutionary algorithms may achieve superior sample efficiency, although ECPv2 narrows this gap through algorithmic acceleration.
Extensions proposed include:
- Alternative metrics beyond Euclidean for structured domains.
- Integrating lightweight surrogate models once $\varepsilon_t$ substantially exceeds the true $L$.
- Online adaptation of the growth rule for $\varepsilon_t$ and the patience parameter $C$.
- Plug-and-play integration in black-box optimization pipelines.
ECP code is available at https://github.com/fouratifares/ECP (Fourati et al., 6 Feb 2025).
ECP and ECPv2 offer minimax-optimal global optimization for Lipschitz-continuous functions with unknown smoothness, distinguished by their rigorous acceptance rule, adaptive behavior, and scalable implementation (Fourati et al., 6 Feb 2025, Fourati et al., 20 Nov 2025).