
Statistical Firefly Algorithm (SFA)

Updated 25 January 2026
  • Statistical Firefly Algorithm (SFA) is an enhanced version of the Firefly Algorithm that integrates statistical hypothesis testing to filter out low-utility moves.
  • It incorporates a data-driven mechanism that reduces the number of computationally expensive objective evaluations while maintaining solution quality.
  • SFA is validated in truss topology optimization, achieving up to 6× fewer function evaluations with comparable or improved convergence rates.

The Statistical Firefly Algorithm (SFA) is an enhancement of the canonical Firefly Algorithm (FA) for global optimization, particularly in computationally intensive contexts such as truss topology optimization. SFA introduces a data-driven hypothesis-testing mechanism to selectively bypass low-utility moves between agents, thus reducing the total number of objective function evaluations while maintaining or improving optimization quality. The algorithm overlays a statistical filtering layer onto the existing FA structure, leveraging the historical efficacy of agent movements to suppress unproductive updates.

1. Conceptual Foundations and Motivation

In standard Firefly Algorithm operations, each agent ("firefly") associated with a solution vector is attracted to and moves toward every other "brighter" (i.e., better) firefly. Each movement triggers a new candidate solution, necessitating costly objective function evaluations, especially in domains such as finite element analysis (FEA)-driven truss design. Not all moves contribute meaningfully to convergence: many candidate updates do not improve the solution quality, resulting in substantial computational waste. The SFA addresses this deficiency by embedding a hypothesis-testing procedure, using the record of past outcomes for each directed pair of fireflies to statistically adjudicate whether a move is “potentially useful.” Moves deemed unlikely to yield improvement are bypassed, which substantially decreases the number of unnecessary objective function calls (Duong et al., 18 Jan 2026).

2. Statistical Hypothesis-Testing Mechanism

For each ordered pair of fireflies (i, j), SFA maintains a record:

  • n_{ij}: the number of past moves from i toward j (initialized at 1).
  • m_{ij}: the sample mean of binary success scores (1 if the move improved i, else 0).
  • s_{ij}: the sample standard deviation of those success scores.

Prior to each candidate move, SFA performs a one-tailed statistical hypothesis test:

  • Null hypothesis H_0: \mu_{ij} \geq p_0
  • Alternative H_1: \mu_{ij} < p_0

where p_0 is a threshold drawn uniformly from [0, 1] (redrawn for each test), and \alpha is the significance level. The test statistic is

t_{ij} = \frac{m_{ij} - p_0}{s_{ij} / \sqrt{n_{ij}}}

This value is compared to the left critical value -t_{\alpha, n_{ij}-1}. If t_{ij} < -t_{\alpha, n_{ij}-1}, the collaboration is deemed ineffective (P_{ij} = 0) and the move is not attempted. Otherwise, the move proceeds (P_{ij} = 1). After the move, the outcome is scored and the statistics are updated: n_{ij} is incremented, m_{ij} and s_{ij} are recomputed, and a new p_0 is drawn for the next test (Duong et al., 18 Jan 2026).
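The per-pair bookkeeping and skip decision can be sketched in Python. This is an illustrative reconstruction, not the authors' code: it maintains the running statistics with Welford's update and, as a simplifying assumption, substitutes the standard-normal critical value (about 1.645 at \alpha = 0.05) for the exact Student-t quantile, a close approximation once n_{ij} grows.

```python
import math
import random

Z_CRIT = {0.05: 1.645, 0.01: 2.326}  # normal approximation to t critical values

class PairStats:
    """Success record for one directed pair (i, j): n_ij, m_ij, s_ij."""

    def __init__(self):
        self.n = 1        # number of past moves, initialized at 1
        self.mean = 1.0   # sample mean of binary success scores, initialized at 1
        self.m2 = 0.0     # running sum of squared deviations (Welford)

    @property
    def std(self):
        """Sample standard deviation s_ij of the success scores."""
        return math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0

    def record(self, success):
        """Welford update after a move: success is 1 if the move improved i, else 0."""
        self.n += 1
        delta = success - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (success - self.mean)

    def allow_move(self, alpha=0.05, rng=random):
        """One-tailed test of H0: mu_ij >= p0; the move is skipped only if H0 is rejected."""
        p0 = rng.random()          # threshold drawn uniformly per test
        s = self.std
        if s == 0.0:               # degenerate sample: cannot reject H0, so move
            return True
        t = (self.mean - p0) / (s / math.sqrt(self.n))
        return t >= -Z_CRIT[alpha]  # reject (P_ij = 0) only when t < -t_crit
```

Because the record starts at n_{ij} = 1, m_{ij} = 1, s_{ij} = 0, every pair's first moves are allowed; the filter only activates once a pair has accumulated a history of failures.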

3. Integration with the Standard Firefly Algorithm

The underlying mechanics of the FA remain unchanged in SFA except for the motion-filtering step. In FA, each agent i moves toward each brighter agent j according to:

\beta(r_{ij}) = \beta_0 e^{-\gamma r_{ij}^2}

X_i \leftarrow X_i + \beta(r_{ij})(X_j - X_i) + \alpha_t(\mathrm{rand} - 0.5)

where r_{ij} is the Euclidean distance between i and j, \beta_0 the initial attractiveness, \gamma the absorption coefficient, and \alpha_t a decaying random step size. In SFA, this update is executed only if P_{ij} = 1 (the result of the hypothesis test); if P_{ij} = 0, the move and its evaluation are skipped. This design introduces negligible overhead, as the hypothesis test is computationally inexpensive relative to the objective function evaluation (Duong et al., 18 Jan 2026).
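The move itself is unchanged from FA; SFA only decides whether to attempt it. A minimal sketch of one move, assuming list-based solution vectors (the default parameter values mirror the examples given in Section 4; the injectable `rng` is an illustrative convenience, not part of the source):

```python
import math
import random

def fa_move(x_i, x_j, t, beta0=2.0, gamma=1.0, alpha0=1.0, w=0.978, rng=random):
    """One FA move of agent i toward a brighter agent j (illustrative sketch)."""
    r2 = sum((a - b) ** 2 for a, b in zip(x_i, x_j))  # squared Euclidean distance r_ij^2
    beta = beta0 * math.exp(-gamma * r2)              # attractiveness beta(r_ij)
    alpha_t = alpha0 * w ** t                         # decaying random step size
    return [a + beta * (b - a) + alpha_t * (rng.random() - 0.5)
            for a, b in zip(x_i, x_j)]
```

In SFA this function would be called only for pairs with P_{ij} = 1, so the subsequent objective evaluation of the returned candidate is the cost being rationed.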

4. Pseudocode and Parameterization

SFA initialization sets n_{ij} \leftarrow 1, m_{ij} \leftarrow 1, s_{ij} \leftarrow 0, and P_{ij} \leftarrow 1 for all pairs. Key parameters include:

  • \beta_0: initial attractiveness (e.g., 2.0)
  • \gamma: light absorption coefficient (e.g., 1.0)
  • w: step-size cooling factor (e.g., 0.978)
  • \alpha_0: initial random step size, component-wise in [0, x_{\max} - x_{\min}]
  • \alpha: significance level of the hypothesis test (commonly 0.05)
  • p_0: drawn uniformly from [0, 1] for each test
  • npop: population size (recommended \geq 20; larger for redundancy)
  • maxIter: maximum iterations (problem-dependent, e.g., 1000)

Best practices include adopting a smaller \alpha for stricter filtering (at the risk of missing improvements), leveraging larger swarms to offset skipped moves, and randomizing p_0 to inject diversity in test thresholds (Duong et al., 18 Jan 2026).
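As a small numeric illustration of the cooling factor (not from the source), the schedule \alpha_t = \alpha_0 w^t with w = 0.978 halves the random step size roughly every 31 iterations, since \ln 2 / (-\ln 0.978) \approx 31.2:

```python
import math

def step_size(alpha0, w, t):
    """Decaying random step size alpha_t = alpha0 * w**t (w is the cooling factor)."""
    return alpha0 * w ** t

# Half-life of the step size under w = 0.978: about 31 iterations.
half_life = math.log(2) / -math.log(0.978)
```

This gives a quick feel for how maxIter and w interact: at w = 0.978 and 1000 iterations, the perturbation has decayed by roughly 2^32, so the final phase is almost purely attraction-driven.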

Core SFA Workflow (abridged from source):

Input: npop, maxIter, (β₀, γ, w, α₀), α
Initialize n_{ij}, m_{ij}, s_{ij}, P_{ij}
Randomly initialize X_i and compute f(X_i)
for t = 1 to maxIter:
  for each i, j:
    if f(X_j) < f(X_i) and P_{ij} == 1:
      Compute r_{ij}, β_{ij}
    X_i' ← X_i + β_{ij} rand (X_j - X_i) + α_t (rand - 0.5)
      Evaluate f(X_i'); update success statistic
      Update n_{ij}, m_{ij}, s_{ij}
      Draw new p₀, compute t_{ij}, update P_{ij}
      if improved: X_i ← X_i'
  Update global best, decrease α_t
Output: best X
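The abridged workflow above can be condensed into a runnable sketch. This is not the paper's implementation: the objective, bounds, and defaults are arbitrary assumptions, the critical value uses the standard-normal approximation (1.645 at \alpha = 0.05) instead of the exact t quantile, and the per-component rand factor on the attraction term is dropped in favor of the update in Section 3.

```python
import math
import random

def sfa_minimize(f, dim, npop=10, max_iter=50, beta0=2.0, gamma=1.0,
                 alpha0=0.5, w=0.978, seed=0, lo=-5.0, hi=5.0):
    """Compact SFA sketch: FA moves gated by a one-tailed test on past success."""
    rng = random.Random(seed)
    z_crit = 1.645  # normal approximation to the t critical value at alpha = 0.05
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(npop)]
    F = [f(x) for x in X]
    # Per directed pair (i, j): [n_ij, m_ij, M2], with n and m initialized at 1
    stats = {(i, j): [1, 1.0, 0.0]
             for i in range(npop) for j in range(npop) if i != j}
    for it in range(max_iter):
        alpha_t = alpha0 * w ** it  # decaying random step size
        for i in range(npop):
            for j in range(npop):
                if i == j or F[j] >= F[i]:
                    continue  # only move toward strictly brighter fireflies
                n, m, m2 = stats[(i, j)]
                s = math.sqrt(m2 / (n - 1)) if n > 1 else 0.0
                p0 = rng.random()  # threshold drawn uniformly per test
                if s > 0 and (m - p0) / (s / math.sqrt(n)) < -z_crit:
                    continue  # P_ij = 0: skip the move and its evaluation
                r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                beta = beta0 * math.exp(-gamma * r2)
                cand = [a + beta * (b - a) + alpha_t * (rng.random() - 0.5)
                        for a, b in zip(X[i], X[j])]
                f_cand = f(cand)  # the expensive evaluation the filter rations
                success = 1.0 if f_cand < F[i] else 0.0
                n += 1  # Welford update of the pair's success record
                delta = success - m
                m += delta / n
                m2 += delta * (success - m)
                stats[(i, j)] = [n, m, m2]
                if success:
                    X[i], F[i] = cand, f_cand
    best = min(range(npop), key=F.__getitem__)
    return X[best], F[best]
```

For example, `sfa_minimize(lambda x: sum(v * v for v in x), dim=2)` runs the gated search on the sphere function; counting calls to `f` versus the plain FA double loop makes the evaluation savings directly observable.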

5. Computational Experiments and Performance Results

SFA was validated on benchmark truss topology optimization tasks:

  • 12-node, 39-element 2D truss
  • 10-node, 45-element 2D truss
  • 35-node, 595-element 2D truss (315 variables by symmetry)
  • 18-node, 153-element 3D truss

Metrics were averaged over 1,000 independent runs, including best/average/worst weight, standard deviation, success rate (within 2% of known optimum), function evaluations (FE), and wall-clock time.

Representative results for the 12-node, 39-element 2D truss:

| Algorithm | Best (lb) | Avg (lb) | Std (lb) | FE evals | Time (s) |
|-----------|-----------|----------|----------|----------|----------|
| FA (20)   | 193.200   | 214.7    | 15.5     | 20,000   | 32.9     |
| SFA (20)  | 193.200   | 213.2    | 14.6     | 4,800    | 7.8      |
| SFA (30)  | 193.200   | 214.3    | 15.2     | 6,700    | 14.3     |
| SFA (40)  | 193.200   | 213.9    | 14.8     | 9,100    | 22.8     |

Across all problems, SFA reduced FE by a factor of 2–6 while maintaining or improving quality and convergence rate. All SFA variants attained the known optimum in the highlighted case. Larger swarms compensated for the reduction in move attempts by providing additional search diversity (Duong et al., 18 Jan 2026).

6. Analysis of Algorithmic Benefits and Trade-offs

Empirical results indicate that the lightweight hypothesis-testing overlay in SFA functions effectively as a predictor of future move utility on the basis of historical outcomes. The one-tailed mean test statistically filters out low-value collaborations between firefly pairs, so the most expensive operation (the objective evaluation, often an FEA) is avoided whenever the estimated probability of success is statistically insufficient. FA's fundamental mechanisms (distance-based attraction, random perturbation, and population-based search) remain intact. SFA achieves significant computational savings (up to 6× fewer FEAs) with negligible losses or, in some cases, modest improvements in the quality and robustness of the final designs. Adjusting \alpha and the population size lets practitioners balance the competing objectives of evaluation minimization and thorough search coverage (Duong et al., 18 Jan 2026).

7. Implications and Extension Potential

By embedding statistical learning in the core FA loop, SFA empirically demonstrates the value of adaptive, experience-driven strategy selection in high-dimensional, computationally intensive optimization. While the reported results are specific to truss topology design, the mechanism requires only an expensive objective evaluation and preservation of the FA update paradigm, suggesting applicability to similarly structured engineering design and simulation-based optimization tasks. Further, randomizing the hypothesis threshold p_0 introduces search heterogeneity, possibly mitigating premature convergence and enabling robust search behavior over diverse optimization landscapes (Duong et al., 18 Jan 2026).
