
Chicken Swarm Optimization (CSO)

Updated 22 November 2025
  • Chicken Swarm Optimization (CSO) is a population-based meta-heuristic algorithm that utilizes hierarchical roles—roosters, hens, and chicks—to balance exploration and exploitation.
  • Algorithmic enhancements, such as dual influence on chicks, address premature convergence and improve document image enhancement metrics like MSE and PSNR.
  • Adaptive role reassignments and specific parameter settings yield faster convergence and lower computational costs compared to methods like CS, FA, and ABC.

Chicken Swarm Optimization (CSO) is a population-based meta-heuristic algorithm modeled on the hierarchical and social foraging behavior of chicken flocks. Originating with Meng et al., CSO partitions agents ("chickens") into functionally distinct roles—roosters, hens, and chicks—with explicit fitness-based hierarchy and role-structured information flow. Recent research, notably in the context of handwritten document image enhancement, has focused on algorithmic enhancements to address stagnation and improve exploitation-exploration trade-offs via bi-criteria optimization (Mugisha et al., 20 Oct 2024).

1. Algorithmic Structure of Chicken Swarm Optimization

The fundamental mechanism of CSO is built around dynamic role assignment and update rules that reflect the social organization of a chicken flock. The algorithm operates over a population of N candidates within a search space of dimension D, iteratively updating candidate positions according to hierarchical role dynamics.

Every G generations, the entire population is sorted by fitness (f_i), and roles are reassigned as follows:

  • Roosters: A fraction R_N of highest-fitness individuals. Each rooster i executes a position update via a broad, Gaussian-guided search:

x_{i,j}^{t+1} = x_{i,j}^t + \mathrm{Randn}(0, \sigma_i^2)\,(x_{i,j}^t - x_{k,j}^t)

where k is a randomly selected distinct rooster index, and σ_i² is the variance (constant or fitness-difference based).

  • Hens: A majority fraction H_N, who update by attraction toward both the best rooster and a random peer:

x_i^{t+1} = x_i^t + S_1\,\mathrm{rand}()\,(x_{r_1}^t - x_i^t) + S_2\,\mathrm{rand}()\,(x_{r_2}^t - x_i^t)

with S_1, S_2 as exponential functions of fitness differences.

  • Chicks: Remaining individuals, each assigned a mother hen m(i). The original chick update is a simple follow:

x_i^{t+1} = x_i^t + F_L\,(x_{m(i)}^t - x_i^t)

where F_L is a moderate follow factor sampled from a fixed range.

This role-based dynamic is designed to balance exploitation (within-group copying) and exploration (rooster-wide variation), with periodic hierarchy reassignment to ensure adaptability (Mugisha et al., 20 Oct 2024).
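The three role updates above can be sketched as one NumPy update step. This is a minimal illustration under stated assumptions: minimization (lower fitness is better), a constant rooster variance, and the commonly used exponential forms for S_1 and S_2; `cso_step` and its parameter names are ours, not the paper's.

```python
import numpy as np

def cso_step(X, fitness, rng, rn_frac=0.05, hn_frac=0.75, fl_range=(0.0, 0.4)):
    """One CSO position update (sketch). Lower fitness is assumed better."""
    N, D = X.shape
    order = np.argsort(fitness)                       # best individuals first
    RN, HN = max(1, int(rn_frac * N)), int(hn_frac * N)
    roosters, hens, chicks = order[:RN], order[RN:RN + HN], order[RN + HN:]
    Xn = X.copy()

    for i in roosters:  # broad Gaussian-guided search relative to another rooster
        k = rng.choice(roosters[roosters != i]) if RN > 1 else i
        Xn[i] = X[i] + rng.normal(0.0, 1.0, D) * (X[i] - X[k])  # sigma^2 = 1 (constant variant)

    for i in hens:      # pulled toward a rooster (r1) and a random other flock member (r2)
        r1 = rng.choice(roosters)
        r2 = rng.choice(order[order != i])
        S1 = np.exp((fitness[i] - fitness[r1]) / (abs(fitness[i]) + 1e-12))
        S2 = np.exp(fitness[r2] - fitness[i])
        Xn[i] = (X[i] + S1 * rng.random(D) * (X[r1] - X[i])
                      + S2 * rng.random(D) * (X[r2] - X[i]))

    for i in chicks:    # follow a randomly chosen mother hen with factor F_L
        m = rng.choice(hens)
        FL = rng.uniform(*fl_range)
        Xn[i] = X[i] + FL * (X[m] - X[i])
    return Xn
```

In practice the mother-hen assignment m(i) is fixed between reassignment intervals rather than resampled each step, as done here for brevity.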

2. Algorithmic Enhancements for Handwritten Document Image Enhancement

A central issue identified in the original CSO is the risk of premature convergence: in particular, chicks learning solely from their mother hen can lead the swarm to become locally trapped. The improved CSO (ICSO) introduces a dual-influence chick update, enabling chicks to be influenced both by their mother and the group's rooster:

x_i^{t+1} = x_i^t + F\left[s\,\mathrm{rand}()\,(x_{m(i)}^t - x_i^t) + (1-s)\,\mathrm{rand}()\,(x_{r(i)}^t - x_i^t)\right]

where F is a global learning factor, s(t) is an adaptive "self-learning" coefficient, and r(i) is the rooster index in the group.

The coefficient s is scheduled to decay from s_max toward s_min as iterations proceed:

s(t) = s_{\min}\left(\frac{s_{\max}}{s_{\min}}\right)^{\frac{1}{1+10\,t/T_{\max}}}

causing chicks to progressively rely more on the rooster and less on their mother as the algorithm proceeds.
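The schedule and the dual-influence move can be transcribed directly (a sketch; function names are ours, NumPy assumed):

```python
import numpy as np

def self_learning_coef(t, T_max, s_min=0.4, s_max=0.9):
    """s(t) = s_min * (s_max / s_min) ** (1 / (1 + 10 t / T_max)).
    Starts at s_max (t = 0) and decays toward s_min."""
    return s_min * (s_max / s_min) ** (1.0 / (1.0 + 10.0 * t / T_max))

def icso_chick_update(x_i, x_mother, x_rooster, t, T_max, F=0.4, rng=None):
    """Dual-influence chick move: weight s on the mother, (1 - s) on the rooster."""
    rng = np.random.default_rng() if rng is None else rng
    s = self_learning_coef(t, T_max)
    return x_i + F * (s * rng.random(x_i.shape) * (x_mother - x_i)
                      + (1 - s) * rng.random(x_i.shape) * (x_rooster - x_i))
```

At t = 0 the exponent is 1 and s equals s_max exactly, so early chicks imitate their mother; by t = T_max the exponent has shrunk to 1/11 and the rooster term dominates.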

The ICSO is applied within a histogram modification framework, optimizing a bi-criteria objective:

J(x) = \|x-h\|^2 + \lambda\|x-u\|^2 + \gamma\|D\,x\|^2

for input histogram h, reference histogram u, contrast weight λ, and detail-preserving weight γ. After convergence, the solution is normalized and mapped through a standard histogram equalization function (Mugisha et al., 20 Oct 2024).
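A direct transcription of this objective, taking D to be a first-difference operator (an assumption on our part; the paper may define the detail term differently):

```python
import numpy as np

def bi_criteria_objective(x, h, u, lam, gamma):
    """J(x) = ||x - h||^2 + lam * ||x - u||^2 + gamma * ||D x||^2,
    with D assumed to be the first-difference (smoothness) operator."""
    Dx = np.diff(x)                       # detail-preserving term
    return (np.sum((x - h) ** 2)
            + lam * np.sum((x - u) ** 2)
            + gamma * np.sum(Dx ** 2))
```

Each candidate histogram x is scored against the input histogram h, pulled toward the reference u by λ, and penalized for bin-to-bin roughness by γ.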

3. Parameterization and Initialization

Typical ICSO settings for image histogram optimization with D = 256 include:

| Role/Parameter | Value/Range |
| --- | --- |
| Population size N | 50–100 |
| Roosters R_N | 0.05 × N |
| Hens H_N | 0.75 × N |
| Mother hens M_N | 0.10 × H_N |
| Chicks C_N | N − R_N − H_N |
| Update interval G | 10 |
| Global learning factor F | 0.4 |
| Follow factor F_L | Uniform in [0, 0.4] |
| Self-learning s_max, s_min | 0.9, 0.4 |
| Iterations T_max | 1000 |
| Contrast weight λ | 0–20 (image-dependent) |
| Detail weight γ | 10³–10⁹ |

Initial candidate histograms are sampled uniformly from [0, 1]^256, and the bi-criteria objective governs fitness evaluation. Role assignment and reassignment follow fitness-based sorting every G generations (Mugisha et al., 20 Oct 2024).
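The initialization and role bookkeeping above might look as follows (a sketch; the truncation of fractional counts and the placeholder fitness are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
N, D, G = 50, 256, 10

# Candidate histograms sampled uniformly from [0, 1]^256
X = rng.uniform(0.0, 1.0, size=(N, D))

# Role counts from the table; mother hens are a subset of the hens,
# so roosters, hens, and chicks partition the population
RN = int(0.05 * N)        # roosters
HN = int(0.75 * N)        # hens
MN = int(0.10 * HN)       # mother hens, drawn from the hens
CN = N - RN - HN          # chicks

# Every G generations: sort by fitness and reassign roles
fitness = rng.random(N)                 # placeholder; the paper uses J(x)
order = np.argsort(fitness)             # best (lowest J) first
roosters, hens, chicks = order[:RN], order[RN:RN + HN], order[RN + HN:]
```

With N = 50 this gives 2 roosters, 37 hens (3 of them mothers), and 11 chicks.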

4. Performance on Document Image Enhancement

ICSO was benchmarked against Cuckoo Search (CS), Firefly Algorithm (FA), and Artificial Bee Colony (ABC) on handwritten document images (size 600 × 100 pixels), using mean-square error (MSE), peak signal-to-noise ratio (PSNR), entropy, and variance as quality metrics.

Results averaged across ten runs per method demonstrated:

  • Image 1 (poor quality paper):
    • Lowest MSE (0.98; vs. 1.11–2.06).
    • Highest PSNR (≈24.1 dB).
    • Lowest entropy (0.98 bits).
    • Variance ≈44, reflecting optimal balance.
  • Image 2 (background clutter):
    • MSE 0.49 (vs. 0.46–0.70 for competing methods).
    • PSNR ≈25.5 dB.
    • Entropy 0.49 bits.
    • Variance ≈25.4.

ICSO delivered visually sharper text with fewer artifacts than CS, FA, or ABC. Convergence was achieved roughly 20% faster, attributed to the chicks' added cross-group learning (Mugisha et al., 20 Oct 2024).

5. Convergence Behavior and Computational Complexity

The introduction of dual mother-rooster influence (and its adaptive scheduling) in ICSO maintains search diversity for more generations, reducing the risk of swarm stagnation in local minima. Empirically, ICSO converged to stable solutions in fewer than 0.2 × T_max iterations, outperforming the original CSO, which more readily stalled.

Per iteration, computational cost is O(N D) for position updates and fitness evaluations. Every G steps, an additional O(N log N) sort is performed for role reassignment. The total time complexity is thus:

O(T_{\max}\,N\,D + (T_{\max}/G)\,N\log N)

On a typical setup (N ≈ 50, D = 256, T_max = 1000), ICSO completes in under one second with an efficient C or MATLAB implementation (Mugisha et al., 20 Oct 2024).
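Plugging the stated values into the complexity expression gives a rough operation count (illustrative arithmetic only; the log base and unit costs are assumptions, not measured timings):

```python
from math import log2

N, D, T_max, G = 50, 256, 1000, 10

update_ops = T_max * N * D                 # O(T_max N D) position/fitness work
sort_ops = (T_max // G) * N * log2(N)      # O((T_max / G) N log N) reassignment sorts

print(update_ops)   # 12800000 elementary update steps
print(sort_ops)     # ~2.8e4 comparison steps, negligible by comparison
```

The update term dominates by nearly three orders of magnitude, which is consistent with the sort interval G having little effect on overall runtime.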

6. Significance and Comparative Perspective

By integrating bi-criteria histogram optimization with a dynamically improved CSO framework—particularly endowing chicks with alternating mother-rooster influence and adaptive self-learning—the approach achieves demonstrably superior document image enhancement over state-of-the-art meta-heuristic methods. Enhanced convergence rates, lower error metrics (MSE), and improved visual quality establish ICSO as a high-performing variant, especially under challenging conditions of poor paper quality or significant background clutter (Mugisha et al., 20 Oct 2024).

