
Adaptive Network-Based Fuzzy Inference System

Updated 3 March 2026
  • Adaptive Network-Based Fuzzy Inference System (ANFIS) is a hybrid neuro-fuzzy framework that combines first-order Takagi–Sugeno fuzzy rules with neural network learning, ensuring both data-driven function approximation and interpretability.
  • It utilizes a structured five-layer network with modular fuzzy membership functions and local linear regression, trained via an alternating scheme of least-squares estimation and gradient descent.
  • ANFIS is widely applied in regression, forecasting, and control, providing measurable performance benefits in terms of RMSE, R², and convergence efficiency across diverse engineering and scientific domains.

An Adaptive Network-Based Fuzzy Inference System (ANFIS) is a hybrid neuro-symbolic framework that integrates first-order Takagi–Sugeno fuzzy inference with neural network learning via gradient-based optimization. ANFIS possesses a modular, layered architecture where fuzzy logic membership functions act as trainable "receptive fields" and local linear regression models (rule consequents) form the interpretive core. Classical ANFIS employs a hybrid learning scheme—alternating least-squares estimation for linear outputs and gradient descent for membership-function parameters (the "premise parameters")—to marry data-driven function approximation with the interpretability and modularity of linguistic fuzzy rules. This framework has proved robust across regression, control, and classification tasks, and as a component of domain-knowledge hybrid systems.

1. Network Structure and Mathematical Formalism

ANFIS is realized as a five-layer feed-forward network implementing a first-order Sugeno fuzzy inference system:

  1. Layer 1: Input Fuzzification. Each crisp input variable $x_j$ is mapped to $M_j$ fuzzy sets via parameterized membership functions (commonly Gaussian, generalized bell, triangular, or sigmoid):

$$\mu_{A_{ij}}(x_j) = \exp\!\left[-\frac{(x_j - c_{ij})^2}{2\sigma_{ij}^2}\right] \quad \text{or} \quad \mu_{A_{ij}}(x_j) = \frac{1}{1 + \left|\frac{x_j - c_{ij}}{a_{ij}}\right|^{2b_{ij}}}$$

  2. Layer 2: Rule Firing Strength. Each rule corresponds to a combination of input fuzzy sets, and its firing strength $w_i$ is the product of the relevant membership grades:

$$w_i = \prod_{j=1}^{n} \mu_{A_{ij}}(x_j)$$

  3. Layer 3: Normalization. Firing strengths are normalized across all $R$ rules:

$$\bar{w}_i = \frac{w_i}{\sum_{k=1}^{R} w_k}$$

  4. Layer 4: Rule Consequent Evaluation. Each rule applies a local linear (first-order) output function:

$$f_i(x) = p_{i,0} + \sum_{j=1}^{n} p_{i,j}\, x_j$$

The output of this layer, before aggregation, is $\bar{w}_i f_i(x)$.

  5. Layer 5: Defuzzification and Output Aggregation. The global ANFIS output is the weighted sum across all rules:

$$y = \sum_{i=1}^{R} \bar{w}_i f_i(x)$$

The total number of rules $R$ equals $\prod_{j=1}^{n} M_j$ under full grid partitioning, leading to combinatorial growth with input dimensionality.
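The five layers above can be sketched as a single NumPy forward pass. This is a minimal illustration assuming Gaussian MFs and an explicit rule-index table; the function and argument names are ours, not from the cited papers:

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Gaussian membership grade (Layer 1 building block)."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def anfis_forward(x, centers, sigmas, rules, p):
    """One forward pass through the five ANFIS layers.

    x       : (n,) crisp input vector
    centers : (n, M) Gaussian MF centers, M fuzzy sets per input
    sigmas  : (n, M) Gaussian MF widths
    rules   : (R, n) fuzzy-set index each rule uses for each input
    p       : (R, n+1) consequent parameters [p_i0, p_i1, ..., p_in]
    """
    n = len(x)
    # Layer 1: fuzzification -- grade of x_j in each of its fuzzy sets
    mu = gaussian_mf(x[:, None], centers, sigmas)        # (n, M)
    # Layer 2: firing strength w_i = product of the rule's grades
    w = np.prod(mu[np.arange(n), rules], axis=1)         # (R,)
    # Layer 3: normalization
    w_bar = w / w.sum()
    # Layer 4: local linear consequents f_i(x) = p_i0 + sum_j p_ij x_j
    f = p[:, 0] + p[:, 1:] @ x                           # (R,)
    # Layer 5: weighted aggregation
    return float(w_bar @ f)
```

Because Layer 3 makes the weights sum to one, the output is always a convex combination of the local linear models, which is what keeps each rule locally interpretable.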

2. Hybrid Learning Mechanism

Classical ANFIS training alternates between two stages per epoch:

  • Forward pass / least-squares estimation: With the premise parameters (membership-function centers and widths) fixed, the normalized rule strengths are computed for the entire dataset, and the consequent parameters ($p_{i,0}, p_{i,1}, \dots$) of the local linear models are updated by solving a linear least-squares problem across all rules (Chaki et al., 2016, Timur et al., 2021, Pa et al., 2022).
  • Backward pass / gradient descent: With the consequent parameters fixed, the premise parameters ($c_{ij}$, $\sigma_{ij}$ or $a_{ij}$, $b_{ij}$) are updated via gradient descent to minimize the squared output error, using backpropagation through the differentiable fuzzy network (Masoumi et al., 13 Jan 2025, Al-Fetyani et al., 2020, Shamshirband et al., 2019).
  • Loss function: Typical cost functions are root mean square error (RMSE) or mean squared error (MSE), with domain-specific extensions in hybrid architectures.
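The forward half of one epoch reduces to a single linear least-squares solve for all consequent parameters at once, because the output is linear in them. A sketch, assuming the normalized firing strengths have already been computed (the helper name `lse_consequents` is ours):

```python
import numpy as np

def lse_consequents(X, y, w_bar):
    """Least-squares estimate of the consequent parameters p.

    X     : (T, n) inputs;  y : (T,) targets
    w_bar : (T, R) normalized firing strengths per sample
    Returns p : (R, n+1) such that
    y_hat = sum_i w_bar_i * (p[i,0] + p[i,1:] @ x).
    """
    T, n = X.shape
    R = w_bar.shape[1]
    # Each sample contributes w_bar_i * [1, x_1, ..., x_n] per rule,
    # so the model is linear in the stacked parameter vector.
    Xb = np.hstack([np.ones((T, 1)), X])                       # (T, n+1)
    A = (w_bar[:, :, None] * Xb[:, None, :]).reshape(T, R * (n + 1))
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return theta.reshape(R, n + 1)
```

The backward pass then holds these consequents fixed and backpropagates the squared error into the MF centers and widths.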

3. Rule Base Construction and Membership Function Selection

  • Rule generation: Most studies use full grid partitioning of the inputs ($M^n$ rules for $n$ inputs and $M$ MFs per input), but for high-dimensional problems, or where sparsity is critical, alternative methods select a subset of the combinatorial rule base (Yong et al., 3 Feb 2026). Rules follow the first-order Takagi–Sugeno form ("IF $x_1$ is $A_{i1}$ AND … $x_n$ is $A_{in}$ THEN $f_i(x)$").
  • Membership function types: Gaussian functions are standard for their smoothness and differentiability, which support effective gradient-based optimization. Generalized bell, triangular, trapezoidal, and sigmoid functions are also employed; performance sensitivity to MF type is generally low when the MF count and data coverage are sufficient (Shamshirband et al., 2019, Claywell et al., 2020, Habashy et al., 2022).
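Full grid partitioning simply enumerates every combination of fuzzy-set indices, one rule per combination. A minimal sketch (the function name is illustrative):

```python
import itertools

def grid_rules(mfs_per_input):
    """Enumerate the full grid-partitioned rule base: one rule per
    combination of fuzzy-set indices, M_1 * ... * M_n rules total."""
    return list(itertools.product(*(range(m) for m in mfs_per_input)))

# 3 inputs with 3 MFs each -> 3^3 = 27 rules, matching the
# power-plant configuration in Table 1 below.
assert len(grid_rules([3, 3, 3])) == 27
```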

Table 1: ANFIS Configuration Parameters in Selected Applications

Application (reference)                              Inputs (n)   MFs/Input   Rule Count   MF Type(s)
Wind speed prediction (Timur et al., 2021)               6            3           729      Gaussian
Power plant prediction (Pa et al., 2022)                 3            3            27      Gaussian
Quadcopter control (Al-Fetyani et al., 2020)             2            5            25      Generalized bell
Bubble column reactor (Shamshirband et al., 2019)        4            4           256      Gbell, Gaussian, etc.

4. Integration with Optimization and Hybrid Systems

ANFIS can be embedded in composite frameworks or extended by global optimization:

  • Feature selection hybrids: The ABC–ANFIS system integrates the artificial bee colony algorithm as an outer optimization wrapper. ABC searches for informative predictors from high-dimensional NIR spectra for PLA molecular weight prediction, directly minimizing ANFIS RMSE plus model sparsity. The result is rapid convergence on the most chemically significant features and substantially reduced input dimensionality, enhancing interpretability and predictive accuracy (Masoumi et al., 13 Jan 2025).
  • Domain knowledge fusion: In reservoir characterization, DKFIS integrates domain-knowledge-based rule checking with SVM classification and ANFIS regression, iteratively applying Q-filter corrections before and after regression. Expert qualitative rules constrain ANFIS outputs to physically plausible regions, increasing the correlation coefficient (CC) from ≈0.91 (pure ANFIS) to ≈0.95 and reducing RMSE/AEM/SI accordingly (Chaki et al., 2016).
  • Metaheuristic-optimized ANFIS: PSO-ANFIS and MLP-GWO hybrids use metaheuristic search to refine membership function parameters, further boosting accuracy and robustness on classification and regression tasks (Rajabi et al., 2019, Claywell et al., 2020).
  • Neuro-symbolic compression (KANFIS): The Kolmogorov–Arnold Neuro-Fuzzy Inference System addresses the exponential growth in rules by replacing the product T-norm with additive aggregation (Kolmogorov–Arnold functional decomposition) and sparse rule masking. This reduces complexity from exponential to linear in the number of inputs, enabling tractable application to high-dimensional domains while maintaining transparency in the learned fuzzy rules (Yong et al., 3 Feb 2026).
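The wrapper-style feature-selection loop common to these hybrids can be sketched as follows, with plain random search standing in for the ABC/PSO metaheuristic (a deliberate simplification; in practice `fitness` would wrap ANFIS training and return validation RMSE plus a sparsity penalty):

```python
import random

def wrapper_select(n_features, k, fitness, iters=200, seed=0):
    """Wrapper-style feature selection: repeatedly propose a size-k
    feature subset and keep the one with the lowest cost.

    fitness(subset) -> float, lower is better (e.g. ANFIS validation
    RMSE plus a sparsity term).  Random search here is a stand-in for
    the artificial bee colony / PSO search used in the literature.
    """
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        subset = tuple(sorted(rng.sample(range(n_features), k)))
        cost = fitness(subset)
        if cost < best_cost:
            best, best_cost = subset, cost
    return best, best_cost
```

The key design point is that the optimizer never looks inside ANFIS; it only sees the scalar cost, so any trainable model can be dropped into the inner loop.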

5. Applications in Regression, Forecasting, and Control

ANFIS has demonstrated strong efficacy in diverse application domains where interpretability and transparent error handling are critical:

  • Time series forecasting: Accurate predictions of wind speed (Timur et al., 2021), power generation in energy systems (Pa et al., 2022), and solar diffuse fraction (Claywell et al., 2020) are consistently achieved with high regression coefficients ($R^2 > 0.95$) and low RMSE/MSE. Dense rule bases (hundreds of rules) and appropriate input dimensionality are critical to performance (Shamshirband et al., 2019).
  • Process control: In embedded control, such as quadcopter stabilization, ANFIS-based controllers outperform classical PD controllers, achieving faster settling, lower overshoot, and smoother actuation by extending effective operation across a nonlinear regime (Al-Fetyani et al., 2020).
  • Scientific regression and physics: For baryon-to-meson ratio estimation in high-energy physics, ANFIS (grid-partitioned, 3 MFs/input, nine rules) reaches sub-$10^{-7}$ MSE, outperforming both neural networks and established Monte Carlo event generators (Habashy et al., 2022).
  • Reinforcement learning: Incorporation of ANFIS within actor–critic methods (e.g., Proximal Policy Optimization) yields interpretable, sample-efficient, and stable policies for control environments such as CartPole, with lower variance and higher reproducibility compared to neural-network-only agents (Shankar et al., 22 Jun 2025).
  • Uncertainty modeling: Extensions to interval type-2 fuzzy sets and additive rule bases (KANFIS) explicitly represent epistemic uncertainty and control rule complexity on high-dimensional medical or engineering datasets (Yong et al., 3 Feb 2026).

6. Architectural Limitations and Advances

Classical ANFIS faces scalability issues:

  • Curse of dimensionality: Rule base size $R$ grows exponentially with the number of inputs, capping practical input dimensionality at roughly 5–6 without sparsity or rule-generation heuristics (Yong et al., 3 Feb 2026).
  • Parameter explosion and computational cost: The number of trainable parameters expands rapidly ($O(M^n)$), affecting both memory footprint and learning stability in large systems.
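The scaling claims above can be made concrete by counting parameters for a grid-partitioned ANFIS with Gaussian MFs: the premise side grows linearly ($2nM$ centers and widths), while the consequent side grows as $M^n(n+1)$ (one linear model per rule):

```python
def anfis_param_count(n, M):
    """Trainable-parameter counts for grid-partitioned ANFIS with
    Gaussian MFs: (premise, consequent).

    Premise: a center and a width per MF -> 2 * n * M.
    Consequent: one (n+1)-parameter linear model per rule -> M**n * (n+1).
    """
    return 2 * n * M, M ** n * (n + 1)

for n in (2, 4, 6, 8):
    premise, consequent = anfis_param_count(n, 3)
    print(f"n={n}: premise={premise}, consequent={consequent}")
# n=2: premise=12, consequent=27
# n=4: premise=24, consequent=405
# n=6: premise=36, consequent=5103
# n=8: premise=48, consequent=59049
```

Past roughly six inputs the consequent side dominates completely, which is exactly the regime the sparse and additive variants below target.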

Recent work addresses these limitations:

  • Sparse rule selection (ABC, PSO): By using metaheuristic-driven network structure optimization, input and rule selection can be performed robustly and efficiently (Masoumi et al., 13 Jan 2025, Rajabi et al., 2019).
  • Neuro-symbolic compaction (KANFIS): Additive rule decomposition and mask-based gating provide linear scaling and increase interpretability by ensuring that rules focus on small, semantically meaningful input subsets (Yong et al., 3 Feb 2026).
  • Meta-learning and policy optimization integration: Embedding fuzzy reasoning into differentiable RL frameworks allows for true end-to-end neuro-fuzzy optimization (Shankar et al., 22 Jun 2025).

7. Performance Metrics and Practical Guidelines

ANFIS performance is typically quantified via RMSE, MSE, the coefficient of determination ($R^2$), and the correlation coefficient (CC), together with convergence speed.

Empirical findings underscore key implementation strategies:

  • For moderate $n$, using $M = 3$–4 MFs per input balances expressivity and computational tractability.
  • Hybrid least-squares plus backpropagation learning ensures rapid and stable parameter convergence across diverse tasks.
  • Rule base interpretability and physical/chemical congruence (domain-knowledge rules, constraint filters) enhance model reliability and trustworthiness (Chaki et al., 2016).

8. Conclusion

ANFIS remains a robust, interpretable machine learning methodology for regression, control, and hybrid expert-system domains. With the ongoing development of scalable neuro-fuzzy architectures and integration with stochastic optimization and reinforcement learning, ANFIS continues to extend its reach in scientific and engineering applications requiring both transparency and adaptive power.
