On Fibonacci Ensembles: An Alternative Approach to Ensemble Learning Inspired by the Timeless Architecture of the Golden Ratio
Published 25 Dec 2025 in stat.ML and cs.LG | (2512.22284v1)
Abstract: Nature rarely reveals her secrets bluntly, yet in the Fibonacci sequence she grants us a glimpse of her quiet architecture of growth, harmony, and recursive stability \citep{Koshy2001Fibonacci, Livio2002GoldenRatio}. From spiral galaxies to the unfolding of leaves, this humble sequence reflects a universal grammar of balance. In this work, we introduce \emph{Fibonacci Ensembles}, a mathematically principled yet philosophically inspired framework for ensemble learning that complements and extends classical aggregation schemes such as bagging, boosting, and random forests \citep{Breiman1996Bagging, Breiman2001RandomForests, Friedman2001GBM, Zhou2012Ensemble, HastieTibshiraniFriedman2009ESL}. Two intertwined formulations unfold: (1) the use of normalized Fibonacci weights -- tempered through orthogonalization and Rao--Blackwell optimization -- to achieve systematic variance reduction among base learners, and (2) a second-order recursive ensemble dynamic that mirrors the Fibonacci flow itself, enriching representational depth beyond classical boosting. The resulting methodology is at once rigorous and poetic: a reminder that learning systems flourish when guided by the same intrinsic harmonies that shape the natural world. Through controlled one-dimensional regression experiments using both random Fourier feature ensembles \citep{RahimiRecht2007RFF} and polynomial ensembles, we exhibit regimes in which Fibonacci weighting matches or improves upon uniform averaging and interacts in a principled way with orthogonal Rao--Blackwellization. These findings suggest that Fibonacci ensembles form a natural and interpretable design point within the broader theory of ensemble learning.
The paper proposes a novel weighting scheme using normalized Fibonacci numbers that reduces variance compared to uniform averaging.
It introduces a recursive ensemble flow with second-order dynamics, embedding memory to enhance the expressive power of aggregated models.
Empirical results show that Fibonacci ensembles balance nonlinear target fitting and variance management effectively, especially for random Fourier regressors.
Fibonacci Ensembles: A Structural Framework for Harmonious Ensemble Learning
Introduction
The paper "On Fibonacci Ensembles: An Alternative Approach to Ensemble Learning Inspired by the Timeless Architecture of the Golden Ratio" (2512.22284) advances ensemble learning by introducing the Fibonacci Ensemble paradigm, in which aggregation weights for base predictors are determined by the (normalized) Fibonacci sequence and its attendant golden-ratio asymptotics. The authors situate their methodology as a rigorous and geometrically motivated alternative to canonical schemes such as bagging, boosting, and random forests, embedding it within a unifying theory of general ensemble weighting. The study presents a detailed analysis of the variance-reduction mechanisms, recursive ensemble dynamics, and the expressive conic hull generated by Fibonacci weighting, supported by both theoretical results and empirical evidence from smooth regression tasks.
Formalization of Fibonacci-Weighted Ensembles
The framework employs the classical ensemble model setting, with base predictors h_1, …, h_M in a real Hilbert space. The central construct is the Fibonacci-weighted ensemble, defined by
f_Fib(x) = ∑_{m=1}^{M} α_m h_m(x),    α_m = F_m / ∑_{j=1}^{M} F_j,
where F_m denotes the m-th Fibonacci number. This weighting induces a geometric weighting profile modulated by the golden ratio φ = (1 + √5)/2, yielding exponentially decaying weights that control the bias-variance tradeoff and privilege lower-index learners without completely discarding higher-complexity components.
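As a minimal sketch (not the authors' code), the normalized weights can be computed in closed form with NumPy; the ratio of successive weights converges to φ, and whether the geometric profile increases or decays along the index is purely a matter of the ordering convention (reversing the index yields the decaying profile described above):

```python
import numpy as np

def fibonacci_weights(M):
    """Normalized Fibonacci weights alpha_m = F_m / sum_j F_j for m = 1, ..., M."""
    F = [1.0, 1.0]
    for _ in range(M - 2):
        F.append(F[-1] + F[-2])
    return np.array(F[:M]) / sum(F[:M])

alpha = fibonacci_weights(10)
print(alpha)                    # geometric profile with ratio approaching phi
print(alpha[1:] / alpha[:-1])   # successive ratios -> (1 + sqrt(5)) / 2 ~ 1.618
print(alpha[::-1])              # reversed index order gives the decaying profile
```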
The variance profile is analyzed in the orthogonalized regime, where explicit decorrelation of base learners via a Gram-based orthogonalization operator ensures that cross-terms are suppressed. The paper proves that Fibonacci-weighted ensembles achieve variance less than or equal to that of uniform averaging (given monotonic variance ordering among base predictors), with the variance ratio converging to a constant cvar<1 as M→∞.
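To make the variance claim concrete, the following sketch assumes independent (already orthogonalized) base learners and a monotone variance profile in which the larger Fibonacci weights fall on the lower-variance learners; both assumptions are illustrative choices, not the paper's exact conditions:

```python
import numpy as np

def fibonacci_weights(M):
    F = [1.0, 1.0]
    for _ in range(M - 2):
        F.append(F[-1] + F[-2])
    return np.array(F[:M]) / sum(F[:M])

M = 12
sigma2 = np.linspace(2.0, 0.2, M)     # assumed: base-learner variance decreases with the index
alpha_fib = fibonacci_weights(M)       # largest weights sit on the lowest-variance learners
alpha_uni = np.full(M, 1.0 / M)

# For independent base learners, Var(sum_m alpha_m h_m) = sum_m alpha_m^2 * sigma_m^2.
var_fib = np.sum(alpha_fib**2 * sigma2)
var_uni = np.sum(alpha_uni**2 * sigma2)
print(f"Fibonacci / uniform variance ratio: {var_fib / var_uni:.3f}")   # < 1 in this configuration
```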
Expansion Beyond Convex Aggregation: The Fibonacci Conic Hull
The use of Fibonacci weights not only modifies the variance structure but also extends the hypothesis space beyond the convex hull typical of bagging or stacking. The Fibonacci conic hull is defined as the set of nonnegative (conic, rather than convex) combinations of the base learners reachable under rescaled Fibonacci weightings, a set which is strictly larger than the convex hull but properly contained within the span, thus conferring strictly greater expressive power without sacrificing control over effective model complexity.
Recursive Ensemble Dynamics: From First-Order to Second-Order Flows
A principal innovation is the introduction of second-order recursive ensemble flows, in which the ensemble iterate F_m evolves as
F_m = β F_{m-1} + γ F_{m-2} + ΔF_{m-1},    m ≥ 3,
where updating proceeds according to a linear recursion structurally analogous to the Fibonacci sequence when (β, γ) = (1, 1). This recursion injects "memory" into the ensemble dynamics, expanding representational capacity and governing the ensemble's evolution via the spectral properties of the recursion matrix T. The paper characterizes the stability region of this recursion: ensemble trajectories remain bounded if and only if the spectral radius ρ(T) = max(|λ_+|, |λ_-|) satisfies ρ(T) < 1, with eigenvalues dependent on (β, γ). Near the golden-ratio regime, ensembles exhibit a smooth transition associated with critically damped dynamics.
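A short sketch of this stability check, assuming T is the standard companion matrix of the two-term recursion (an interpretation of the notation, not taken directly from the paper):

```python
import numpy as np

def spectral_radius(beta, gamma):
    """Spectral radius of the companion matrix of F_m = beta*F_{m-1} + gamma*F_{m-2}."""
    T = np.array([[beta, gamma],
                  [1.0,  0.0]])
    return max(abs(np.linalg.eigvals(T)))      # max(|lambda_+|, |lambda_-|)

print(spectral_radius(1.0, 1.0))   # golden-ratio regime: rho = phi ~ 1.618, so iterates grow
print(spectral_radius(0.5, 0.3))   # rho ~ 0.85 < 1: bounded (stable) ensemble trajectories
```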
Rao–Blackwellization and Orthogonalization
Integrating the Rao–Blackwell theorem, the framework advocates orthogonalization followed by aggregation. This two-step process yields minimum-variance estimators in the class of linear combinations, with strict risk dominance over naive weighted averaging, provided the base functions are orthogonal or can be made so. The authors show that, for orthogonal bases and when the signal is low-dimensional, Rao–Blackwellized weighted ensembles attain strictly lower risk than Fibonacci weighting; however, in regimes with overcomplete or moderately correlated features, Fibonacci weighting may offer superior bias-variance tradeoffs.
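A minimal sketch of the orthogonalize-then-aggregate recipe, assuming (for illustration) that the decorrelation step is a thin QR factorization of the stacked base predictions and that the Rao–Blackwellized weights are the least-squares coefficients on the orthogonalized columns; the base learners here are bootstrap polynomial fits chosen purely for concreteness:

```python
import numpy as np

rng = np.random.default_rng(0)
n, M = 200, 8
x = np.linspace(-1.0, 1.0, n)
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(n)

# Base learners: degree-m polynomial fits to bootstrap resamples (illustrative choice).
H = np.empty((n, M))
for m in range(1, M + 1):
    idx = rng.integers(0, n, size=n)
    coefs = np.polyfit(x[idx], y[idx], deg=m)
    H[:, m - 1] = np.polyval(coefs, x)

# Orthogonalize-then-aggregate: thin QR decorrelates the learners, least squares gives the weights.
Q, _ = np.linalg.qr(H)
f_rb = Q @ (Q.T @ y)              # Rao-Blackwellized (variance-optimal linear) aggregate

# Closed-form Fibonacci weighting of the raw learners, for comparison.
F = [1.0, 1.0]
for _ in range(M - 2):
    F.append(F[-1] + F[-2])
alpha = np.array(F) / np.sum(F)
f_fib = H @ alpha

print("orthogonal RB MSE:", np.mean((f_rb - y) ** 2))
print("Fibonacci     MSE:", np.mean((f_fib - y) ** 2))
```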
Empirical Results: Random Fourier and Polynomial Experiments
Controlled one-dimensional regression experiments substantiate the theoretical guarantees. In settings where the base models are random Fourier feature (RFF) regressors, Fibonacci ensembles consistently achieve lower integrated squared error (ISE) than both uniform and orthogonal Rao–Blackwell ensembles, especially for highly nonlinear targets. The golden-ratio-induced weighting emphasizes high-frequency predictors just enough to capture sharp nonlinearities, without overwhelming the ensemble with variance.
Figure 1: Sinusoidal regression with random Fourier features—Fibonacci and other ensembles fit noisy data; the Fibonacci fit tracks the truth more tightly in oscillatory regions.
Figure 2: Sinc regression with random Fourier features—Fibonacci-weighted aggregation balances oscillatory expressiveness and variance stabilization.
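A stripped-down version of the random Fourier feature comparison can be sketched as follows; the target function, bandwidths, ridge penalty, and ensemble size are illustrative assumptions rather than the paper's settings, and which weighting wins will depend on them:

```python
import numpy as np

rng = np.random.default_rng(1)
n, M, D, lam = 300, 10, 30, 1e-2            # samples, learners, RFF dim per learner, ridge penalty
x = np.linspace(-np.pi, np.pi, n)
truth = np.sin(4.0 * x) * np.exp(-0.2 * x**2)
y = truth + 0.1 * rng.standard_normal(n)

def rff_regressor(x_train, y_train, x_eval, scale):
    """Fit a ridge regressor on D random Fourier features with the given frequency scale."""
    omega = scale * rng.standard_normal(D)
    b = rng.uniform(0.0, 2.0 * np.pi, D)
    Z = lambda t: np.sqrt(2.0 / D) * np.cos(np.outer(t, omega) + b)
    Zt = Z(x_train)
    w = np.linalg.solve(Zt.T @ Zt + lam * np.eye(D), Zt.T @ y_train)
    return Z(x_eval) @ w

# Base learners with increasing frequency content (assumed ordering of "complexity").
H = np.column_stack([rff_regressor(x, y, x, scale=0.5 * (m + 1)) for m in range(M)])

F = [1.0, 1.0]
for _ in range(M - 2):
    F.append(F[-1] + F[-2])
alpha_fib = np.array(F) / np.sum(F)
alpha_uni = np.full(M, 1.0 / M)

ise = lambda f: np.trapz((f - truth) ** 2, x)   # integrated squared error vs. the noiseless target
print("uniform   ISE:", ise(H @ alpha_uni))
print("Fibonacci ISE:", ise(H @ alpha_fib))
```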
In contrast, for polynomial bases (which are highly ordered and nearly orthogonal), the empirical risk is minimized by the optimally weighted, orthogonalized ensemble, as anticipated by theory.
Figure 3: Sinusoidal regression with polynomial ensembles—the orthogonal Rao–Blackwell estimator matches the target closely, outperforming Fibonacci weighting in smooth regimes.
Figure 4: Sinc regression with polynomial ensembles—the variance-optimal (orthogonalized) ensemble achieves the best ISE; Fibonacci weighting exhibits excess oscillation in the polynomial tail.
General Weighting Theory and Structural Implications
Beyond the specific Fibonacci law, the authors generalize the aggregation concept. The ensemble aggregation is reformulated as a distributional operator acting on the index set of learners, subsuming a wide family of weighting laws (e.g., geometric, heavy-tailed, symmetric, and recursive schemes). This "General Weighting Theory" provides a principled framework for selecting aggregation strategies tailored to the bias and variance profiles of the model dictionary.
Within this theory, Fibonacci emerges as the canonical second-order recursive weight, occupying a natural design point for dictionaries with monotonically increasing variance or expressivity. The spectral analysis of the aggregation operator elucidates the role of weighting in bias suppression, variance management, and spectral smoothing.
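Read as code, the distributional-operator view amounts to choosing a normalized weight profile over the learner index set; the specific laws below (and their parameters) are illustrative stand-ins for the families named above:

```python
import numpy as np

def weight_profile(law, M, **kw):
    """Return normalized aggregation weights over the index set {1, ..., M}."""
    m = np.arange(1, M + 1)
    if law == "uniform":
        w = np.ones(M)
    elif law == "geometric":                 # w_m proportional to r^m
        w = kw.get("r", 0.8) ** m
    elif law == "heavy_tailed":              # w_m proportional to m^(-s)
        w = m ** (-kw.get("s", 1.5))
    elif law == "fibonacci":                 # canonical second-order recursive law
        w = np.ones(M)
        for k in range(2, M):
            w[k] = w[k - 1] + w[k - 2]
    else:
        raise ValueError(law)
    return w / w.sum()

for law in ("uniform", "geometric", "heavy_tailed", "fibonacci"):
    print(law, np.round(weight_profile(law, 6), 3))
```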
Theoretical and Practical Implications
Theoretical Implications:
The study provides a unified structural view of ensemble learning, connecting variance reduction, expressive power, spectral stability, and generalization error.
It establishes that variance reduction, often the principal motivation for ensembles, can be surpassed in importance by the structured harmonization and expressive geometry of the aggregate.
The paper proves that, under plausible assumptions, Fibonacci ensembles can outperform both uniform and some data-driven schemes in the presence of redundancy and non-orthogonal bases—extending the applicability of ensembles to settings with stable, low-variance base learners such as splines, RKHS regressors, and random features.
Practical Implications:
In machine learning tasks where base models do not form a strongly ordered or strictly orthogonal basis (e.g., RFF, kernel approximations), Fibonacci weighting is a compelling alternative to both uniform and variance-optimal orthogonal weighting.
The methodology is algorithmically simple (weight computation is closed-form, independent of cross-validation or iterative optimization) and naturally extends to recursive settings.
For highly structured or hierarchical model libraries (e.g., tree ensembles, deep neural architectures), Fibonacci-inspired temporal or block weighting may regulate expressivity and suppress overfitting.
Future Directions
The paper outlines clear avenues for further research. Key directions include:
Extension to high-dimensional settings where dictionary geometry and spectral decay are more complex, requiring refined control and adaptive weighting.
Exploration of Fibonacci schemes atop tree-based models (random forests, boosted trees), where weight optimization and orthogonalization are nontrivial.
Integration with data-driven aggregation methods (stacked generalization, super learner) by embedding Fibonacci or other structural priors on the weight profile.
Empirical evaluation across classification tasks, structured data, and deep ensemble models to ascertain the full scope of the proposed methods.
Conclusion
Fibonacci Ensembles, by leveraging the universal architecture of the golden ratio, constitute a principled and rigorous augmentation of ensemble learning theory. By endogenizing recursive structure and harmonized aggregation directly into the weighting law, the method achieves variance stabilization, expressive expansion, and controlled generalization within a single operator framework. This work situates ensemble design within a deeper mathematical and philosophical context, opening new pathways for the principled synthesis of learners in both the stochastic and function space domains.