PRKAN: Parameter-Reduced Kolmogorov-Arnold Networks (2501.07032v4)

Published 13 Jan 2025 in cs.LG

Abstract: Kolmogorov-Arnold Networks (KANs) represent an innovation in neural network architectures, offering a compelling alternative to Multi-Layer Perceptrons (MLPs) in models such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Transformers. By advancing network design, KANs drive groundbreaking research and enable transformative applications across various scientific domains involving neural networks. However, existing KANs often require significantly more parameters in their network layers than MLPs. To address this limitation, this paper introduces PRKANs (Parameter-Reduced Kolmogorov-Arnold Networks), which employ several methods to reduce the parameter count in KAN layers, making them comparable to MLP layers. Experimental results on the MNIST and Fashion-MNIST datasets demonstrate that PRKANs outperform several existing KANs, and their variant with attention mechanisms rivals the performance of MLPs, albeit with slightly longer training times. Furthermore, the study highlights the advantages of Gaussian Radial Basis Functions (GRBFs) and layer normalization in KAN designs. The repository for this work is available at: https://github.com/hoangthangta/All-KAN.


Summary

  • The paper introduces Parameter-Reduced Kolmogorov-Arnold Networks (PRKANs) to significantly reduce the parameter count in KANs while achieving performance competitive with MLPs on MNIST and Fashion-MNIST.
  • PRKANs utilize attention mechanisms, dimension summation, feature weight vectors, and convolutional or pooling layers to achieve parameter reduction in KAN layers.
  • Experiments indicate that Gaussian Radial Basis Functions and layer normalization improve PRKAN performance, achieving accuracy competitive with MLPs despite requiring slightly longer training times.

The paper introduces Parameter-Reduced Kolmogorov-Arnold Networks (PRKANs) as a method to bring the parameter count of Kolmogorov-Arnold Networks (KANs) down to a level comparable with Multi-Layer Perceptrons (MLPs). The authors present experimental results on the MNIST and Fashion-MNIST datasets demonstrating that PRKANs with attention mechanisms rival the performance of MLPs, albeit with slightly longer training times. They also highlight the advantages of Gaussian Radial Basis Functions (GRBFs) and layer normalization in KAN designs.

The paper begins by addressing the Kolmogorov-Arnold Representation Theorem (KART), which states that any continuous multivariate function can be represented as a finite superposition of continuous univariate functions and addition. The authors note that while KANs have shown promise in various applications, they often require significantly more parameters than MLPs.
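
In its standard form, the theorem can be written as $f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left(\sum_{p=1}^{n} \phi_{q,p}(x_p)\right)$, where each outer function $\Phi_q$ and inner function $\phi_{q,p}$ is a continuous univariate function; KAN layers parameterize these univariate functions with learnable bases such as B-splines or RBFs.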

The core contributions of the paper include:

  • The development of PRKANs, which employ attention mechanisms, dimension summation, feature weight vectors, and convolutional/pooling layers to reduce the parameter count in KAN layers.
  • A demonstration of the competitive performance of PRKANs compared to MLPs on the MNIST and Fashion-MNIST datasets.
  • An exploration of components, such as GRBFs and layer normalization, that contribute to the performance of PRKANs.

The paper details the methodology behind PRKANs, including:

  • A review of KART and its implications for neural network design.
  • A discussion of the design of KAN architectures, including the use of learnable activation functions such as B-splines.
  • An analysis of the parameter requirements in KANs versus MLPs, highlighting the need for parameter reduction techniques.
  • A description of the proposed PRKAN architecture, which incorporates attention mechanisms, dimension summation, feature weight vectors, and convolutional/pooling layers to reduce the number of parameters in KAN layers. The authors present equations defining the operation of each of these components (illustrative code sketches of the corresponding reduction heads follow this list), including:
    • Attention Mechanism: let $X_{\text{spline}} \in \mathbb{R}^{B \times D \times (G+k)}$ denote the spline data, where $B$ is the batch size, $D$ is the data dimension, $G$ is the grid size, and $k$ is the spline order. The reduction is computed as
      $X_{\text{linear}} = W_{\text{linear}} X_{\text{spline}} + b_{\text{linear}}, \quad X_{\text{linear}} \in \mathbb{R}^{B \times D \times 1}$, where $W_{\text{linear}}$ and $b_{\text{linear}}$ are the weight and bias of a linear transformation;
      $W_{\text{att}} = \text{softmax}(X_{\text{linear}}, \text{dim}=-2), \quad W_{\text{att}} \in \mathbb{R}^{B \times D \times 1}$, the attention weights;
      $X' = X_{\text{spline}} \odot W_{\text{att}}, \quad X' \in \mathbb{R}^{B \times D \times (G+k)}$, where $\odot$ denotes element-wise multiplication;
      $X'' = \sum_{\text{dim}=-1} X', \quad X'' \in \mathbb{R}^{B \times D}$, the summation along the last dimension;
      $X_{\text{out}} = W_{\text{out}}\, \sigma(X'') + b_{\text{out}}, \quad X_{\text{out}} \in \mathbb{R}^{B \times d_{\text{out}}}$, where $W_{\text{out}}$ and $b_{\text{out}}$ are the weight and bias of a linear transformation, $\sigma$ is an activation function, and $d_{\text{out}}$ is the output dimension.
    • Convolution Layers: starting from $X_{\text{spline}} \in \mathbb{R}^{B \times D \times (G+k)}$,
      $X_{\text{perm}} = \text{permute}(X_{\text{spline}}, 0, 2, 1), \quad X_{\text{perm}} \in \mathbb{R}^{B \times (G+k) \times D}$;
      $X_{\text{conv}} = W_{\text{conv}} X_{\text{perm}} + b_{\text{conv}}, \quad X_{\text{conv}} \in \mathbb{R}^{B \times 1 \times D}$, where $W_{\text{conv}}$ and $b_{\text{conv}}$ are the weight and bias of a 1D convolution;
      $X_{\text{squeeze}} = \text{squeeze}(X_{\text{conv}}, 1), \quad X_{\text{squeeze}} \in \mathbb{R}^{B \times D}$;
      $X_{\text{out}} = W_{\text{out}}\, \sigma(X_{\text{squeeze}}) + b_{\text{out}}, \quad X_{\text{out}} \in \mathbb{R}^{B \times d_{\text{out}}}$, with $W_{\text{out}}$, $b_{\text{out}}$, $\sigma$, and $d_{\text{out}}$ as above.
    • Convolution Layers + Pooling Layers: as above, $X_{\text{perm}} = \text{permute}(X_{\text{spline}}, 0, 2, 1) \in \mathbb{R}^{B \times (G+k) \times D}$;
      $X_{\text{conv}} = W_{\text{conv}} X_{\text{perm}} + b_{\text{conv}}, \quad X_{\text{conv}} \in \mathbb{R}^{B \times (G+k) \times D}$;
      $X_{\text{pool}} = \text{pool}(X_{\text{conv}}), \quad X_{\text{pool}} \in \mathbb{R}^{B \times (G+k) \times \frac{D}{G+k}}$ (max pooling);
      $X_{\text{reshaped}} = \text{reshape}(X_{\text{pool}}), \quad X_{\text{reshaped}} \in \mathbb{R}^{B \times D}$;
      $X_{\text{out}} = W_{\text{out}} X_{\text{reshaped}} + b_{\text{out}}, \quad X_{\text{out}} \in \mathbb{R}^{B \times d_{\text{out}}}$.
    • Dimension Summation: $X' = \sum_{\text{dim}=-1} X_{\text{spline}}, \quad X' \in \mathbb{R}^{B \times D}$, the summation along the last dimension; then $X_{\text{out}} = W_{\text{out}}\, \sigma(X') + b_{\text{out}}, \quad X_{\text{out}} \in \mathbb{R}^{B \times d_{\text{out}}}$.
    • Feature Weight Vectors: $X' = X_{\text{spline}} M, \quad M \in \mathbb{R}^{(G+k) \times 1}, \quad X' \in \mathbb{R}^{B \times D}$, where $M$ is a learnable feature vector; then $X_{\text{out}} = W_{\text{out}}\, \sigma(X') + b_{\text{out}}, \quad X_{\text{out}} \in \mathbb{R}^{B \times d_{\text{out}}}$.
  • A discussion of data normalization techniques, such as batch normalization and layer normalization, and their impact on model performance.
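
To make these reduction heads concrete, the following PyTorch sketches mirror the equations above. They are illustrative reconstructions rather than the authors' code: class names, default hyperparameters ($G = 5$, $k = 3$), and the SiLU activation (the activation favored in the paper's ablation) are assumptions. For scale, a standard KAN layer stores on the order of $d_{\text{in}} \times d_{\text{out}} \times (G+k)$ spline coefficients, versus $d_{\text{in}} \times d_{\text{out}} + d_{\text{out}}$ weights for an MLP layer, which is why each head collapses the $(G+k)$ basis axis before a single output projection. First, the attention-based head:

```python
import torch
import torch.nn as nn

class AttentionReduction(nn.Module):
    """Attention-based reduction head (PRKAN-attn style, illustrative).

    Collapses spline/RBF features of shape (B, D, G + k) to (B, D) with a
    learned attention weight per input feature, then projects to d_out.
    """

    def __init__(self, d_in: int, d_out: int, grid_size: int = 5, spline_order: int = 3):
        super().__init__()
        num_basis = grid_size + spline_order       # G + k basis values per feature
        self.score = nn.Linear(num_basis, 1)       # W_linear, b_linear
        self.out = nn.Linear(d_in, d_out)          # W_out, b_out
        self.act = nn.SiLU()                       # sigma

    def forward(self, x_spline: torch.Tensor) -> torch.Tensor:
        # x_spline: (B, D, G + k) precomputed spline or RBF features
        scores = self.score(x_spline)              # (B, D, 1)
        w_att = torch.softmax(scores, dim=-2)      # softmax over the D features
        x = x_spline * w_att                       # element-wise weighting, (B, D, G + k)
        x = x.sum(dim=-1)                          # collapse the basis axis -> (B, D)
        return self.out(self.act(x))               # (B, d_out)
```

With the assumed defaults ($G = 5$, $k = 3$), the attention score adds only $(G + k) + 1 = 9$ parameters on top of the output projection.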
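
A sketch of the convolution-based heads under the same assumptions; the kernel size and padding are illustrative, and for the pooling variant $D$ must be divisible by $G + k$ so the reshape recovers a $(B, D)$ tensor:

```python
import torch
import torch.nn as nn

class ConvReduction(nn.Module):
    """1D-convolution head (illustrative): collapse the G + k channels to one."""

    def __init__(self, d_in: int, d_out: int, grid_size: int = 5, spline_order: int = 3):
        super().__init__()
        num_basis = grid_size + spline_order
        self.conv = nn.Conv1d(num_basis, 1, kernel_size=3, padding=1)   # -> (B, 1, D)
        self.out = nn.Linear(d_in, d_out)
        self.act = nn.SiLU()

    def forward(self, x_spline: torch.Tensor) -> torch.Tensor:
        x = x_spline.permute(0, 2, 1)        # (B, G + k, D)
        x = self.conv(x).squeeze(1)          # (B, D)
        return self.out(self.act(x))         # (B, d_out)


class ConvPoolReduction(nn.Module):
    """Convolution + max-pooling head (illustrative): keep G + k channels, shrink D."""

    def __init__(self, d_in: int, d_out: int, grid_size: int = 5, spline_order: int = 3):
        super().__init__()
        num_basis = grid_size + spline_order
        self.conv = nn.Conv1d(num_basis, num_basis, kernel_size=3, padding=1)
        self.pool = nn.MaxPool1d(kernel_size=num_basis)   # D -> D / (G + k)
        self.out = nn.Linear(d_in, d_out)                 # d_in must be divisible by G + k

    def forward(self, x_spline: torch.Tensor) -> torch.Tensor:
        x = x_spline.permute(0, 2, 1)        # (B, G + k, D)
        x = self.pool(self.conv(x))          # (B, G + k, D / (G + k))
        x = x.reshape(x.size(0), -1)         # (B, D)
        return self.out(x)                   # (B, d_out); no activation in this variant
```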
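
The two simplest heads, dimension summation and the learnable feature weight vector, again under the same assumptions:

```python
import torch
import torch.nn as nn

class SumReduction(nn.Module):
    """Dimension-summation head (illustrative): sum out the basis axis."""

    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.out = nn.Linear(d_in, d_out)
        self.act = nn.SiLU()

    def forward(self, x_spline: torch.Tensor) -> torch.Tensor:
        x = x_spline.sum(dim=-1)              # (B, D, G + k) -> (B, D)
        return self.out(self.act(x))          # (B, d_out)


class FeatureVectorReduction(nn.Module):
    """Feature-weight-vector head (illustrative): one learnable weight per basis function."""

    def __init__(self, d_in: int, d_out: int, grid_size: int = 5, spline_order: int = 3):
        super().__init__()
        num_basis = grid_size + spline_order
        self.m = nn.Parameter(torch.randn(num_basis, 1) * 0.1)   # M in R^{(G+k) x 1}
        self.out = nn.Linear(d_in, d_out)
        self.act = nn.SiLU()

    def forward(self, x_spline: torch.Tensor) -> torch.Tensor:
        x = (x_spline @ self.m).squeeze(-1)   # (B, D, G + k) @ (G + k, 1) -> (B, D)
        return self.out(self.act(x))          # (B, d_out)
```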

The paper presents experimental results comparing PRKANs with MLPs on the MNIST and Fashion-MNIST datasets. The authors trained each model over 5 independent runs and reported average values for metrics such as training accuracy, validation accuracy, F1 score, and training time. The results showed that PRKANs with attention mechanisms achieved competitive performance compared to MLPs, with slightly longer training times. The authors also found that GRBFs and layer normalization were generally the more advantageous choices in PRKAN designs. For example, with batch normalization, PRKAN-attn models achieve improvements of 1.34% and 0.58% in validation accuracy on MNIST and Fashion-MNIST, respectively. With layer normalization, PRKAN-attn achieves a validation accuracy of 97.46% on MNIST, trailing the MLP by less than 0.26%.

The paper includes ablation studies on the activation functions used in PRKANs, a comparison between RBFs and B-splines, and a recommendation for the positioning of data normalization in PRKANs. The ablation study on activation functions showed that SiLU achieved the best validation accuracy and F1 score on the MNIST dataset while delivering competitive performance on Fashion-MNIST. The comparison between RBFs and B-splines showed that RBFs were 11% to 13% faster than B-splines. The study on the positioning of data normalization showed that layer normalization was generally more effective than batch normalization.
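
One likely contributor to that speed gap is the basis construction itself: a Gaussian RBF feature map is a single vectorized expression, whereas B-spline bases are typically built with the recursive Cox-de Boor formula. Below is a minimal sketch of a GRBF feature map; the grid range and the bandwidth rule (equal to the center spacing) are illustrative assumptions, not the paper's exact settings:

```python
import torch

def gaussian_rbf_features(x: torch.Tensor, num_centers: int = 8,
                          x_min: float = -2.0, x_max: float = 2.0) -> torch.Tensor:
    """Map inputs of shape (B, D) to GRBF features of shape (B, D, num_centers).

    phi_i(x) = exp(-((x - c_i) / h)^2), with centers c_i evenly spaced over
    [x_min, x_max] and bandwidth h equal to the center spacing.
    """
    centers = torch.linspace(x_min, x_max, num_centers, device=x.device)
    h = (x_max - x_min) / (num_centers - 1)                  # bandwidth = center spacing
    return torch.exp(-((x.unsqueeze(-1) - centers) / h) ** 2)
```

These features can be fed directly to any of the reduction heads sketched earlier in place of B-spline features.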

The paper concludes by discussing the limitations of the research and suggesting directions for future work. The authors note that the PRKANs were tested on relatively simple datasets and that more research is needed to evaluate the scalability and efficiency of PRKANs in more complex models. The authors also suggest exploring other parameter reduction strategies, such as tensor decomposition, matrix factorization, or advanced pruning.
