Exponentially Weighted l_2 Regularization Strategy in Constructing Reinforced Second-order Fuzzy Rule-based Model

Published 2 Jul 2020 in cs.LG and stat.ML (arXiv:2007.01208v1)

Abstract: In conventional Takagi-Sugeno-Kang (TSK)-type fuzzy models, constant or linear functions are usually used as the consequent parts of the fuzzy rules, but they cannot effectively describe the behavior within the local regions defined by the antecedent parts. In this article, a theoretical and practical design methodology is developed to address this problem. First, the information granulation (Fuzzy C-Means) method is applied to capture the structure in the data, split the input space into subspaces, and form the antecedent parts. Second, quadratic polynomials (QPs) are employed as the consequent parts. Compared with constant and linear functions, QPs can describe the input-output behavior within the local regions (subspaces) by refining the relationship between the input and output variables. However, although QPs can improve the approximation ability of the model, they can degrade its prediction ability (e.g., through overfitting). To handle this issue, we introduce an exponential weighting approach inspired by the weight function theory encountered in harmonic analysis. More specifically, we adopt exponential functions as the targeted penalty terms and combine them with l2 regularization (i.e., exponentially weighted l2, ewl_2) to match the proposed reinforced second-order fuzzy rule-based model (RSFRM). The advantage of ewl_2 over ordinary l2 lies in separately identifying and penalizing different types of polynomial terms during coefficient estimation; as a result, it not only alleviates overfitting and prevents the deterioration of generalization ability, but also effectively releases the prediction potential of the model.
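
To make the regularization idea concrete, below is a minimal sketch in Python of an exponentially weighted l2 (ewl_2) penalty applied to a quadratic-polynomial rule consequent. It assumes a single rule whose sample weights come from Fuzzy C-Means memberships, and it penalizes constant, linear, and quadratic terms with exponentially increasing weights. The specific weighting scheme (the alpha parameter and per-order exponents), the function names, and the toy data are illustrative assumptions; the paper's exact formulation is not given in the abstract.

import numpy as np

def quadratic_design(X):
    """Expand inputs into [1, x_i, x_i*x_j] polynomial terms (orders 0, 1, 2)."""
    n, d = X.shape
    cols = [np.ones((n, 1)), X]
    orders = [0] + [1] * d
    for i in range(d):
        for j in range(i, d):
            cols.append((X[:, i] * X[:, j])[:, None])
            orders.append(2)
    return np.hstack(cols), np.asarray(orders)

def ewl2_weights(orders, alpha=1.0):
    """Exponential penalty weights: higher-order terms are penalized more."""
    return np.exp(alpha * orders)

def fit_rule_consequent(X, y, membership, lam=1e-2, alpha=1.0):
    """Weighted ridge estimate of one rule's QP coefficients.

    Solves  min_c ||sqrt(m) * (P c - y)||^2 + lam * sum_k w_k c_k^2,
    where m are FCM memberships and w_k = exp(alpha * order_k).
    """
    P, orders = quadratic_design(X)
    W = ewl2_weights(orders, alpha)
    M = np.diag(membership)
    A = P.T @ M @ P + lam * np.diag(W)
    b = P.T @ M @ y
    return np.linalg.solve(A, b)

# Toy usage with synthetic data and stand-in memberships.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 1.0 + X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=50)
m = rng.uniform(0.2, 1.0, size=50)   # stand-in for FCM membership grades
coef = fit_rule_consequent(X, y, m, lam=0.1, alpha=0.5)
print(coef)

In this sketch, plain l2 corresponds to alpha = 0 (all coefficients penalized equally); a positive alpha shrinks the quadratic terms harder than the linear and constant ones, which is the spirit of the separate penalization described in the abstract.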
