
Multi-Scale Gaussian KAN (MSGKAN)

Updated 16 November 2025
  • MSGKAN is a nonlinear feature transform module that augments spatial features with multi-scale Gaussian RBF embeddings for effective scale-adaptive detection.
  • It integrates a concise local convolutional encoder to fuse complementary spatial and frequency-domain features within the SFFR architecture.
  • Empirical results on the SeaDronesSee dataset show improved mAP, confirming its efficacy in handling variable UAV altitudes and multi-scale challenges.

The Multi-Scale Gaussian Kolmogorov–Arnold Network (MSGKAN) is a nonlinear feature transform module introduced within the SFFR (Spatial-Frequency Feature Reconstruction) architecture for multispectral aerial object detection. MSGKAN augments intermediate spatial-domain features with multi-scale Gaussian Radial Basis Function (RBF) embeddings inspired by the Kolmogorov–Arnold decomposition, enhancing the model's adaptability to variable object scales and improving the robustness of detector performance under changing UAV flight altitudes. By parameterizing learnable Gaussian centers and incorporating a concise local convolutional encoder, MSGKAN achieves effective nonlinear feature modeling tailored to both fine- and coarse-scale image structures germane to remote sensing and UAV scenarios.

1. MSGKAN in SFFR: Role and Architectural Integration

MSGKAN operates as a spatial-domain feature reconstruction module in the dual-branch KANFusion block of SFFR, targeting intermediate feature maps $M_i$ from individual sensor modalities (e.g., RGB or IR). For a batch-dimensioned input tensor $M_i \in \mathbb{R}^{B \times H \times W \times C_{in}}$, MSGKAN transforms each local feature vector at spatial position $p$ through a nonlinear expansion in a multi-scale Gaussian space, followed by a small convolutional encoder. MSGKAN's output is then combined with a complementary frequency-domain feature from the FCEKAN module using learnable weights $\alpha, \beta$:

$$M'_{R_i} = \alpha\,\mathrm{Conv}\bigl(f_{\mathrm{gus}}(M_{R_i})\bigr) + \beta\,f_{\mathrm{Cross}}(M_{R_i}, M_{T_i})$$

An analogous path exists for the other branch, ensuring joint enrichment of features in both spatial and frequency domains prior to cross-modal fusion.
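As an illustration, a minimal PyTorch sketch of this weighted spatial–frequency combination; the class name, the placeholder submodules `msgkan_path` and `fcekan_cross`, and the scalar parameterization of $\alpha$ and $\beta$ are assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class SpatialFrequencyFusion(nn.Module):
    """Sketch of the learnable alpha/beta combination in the KANFusion block.

    `msgkan_path` and `fcekan_cross` are stand-ins for the MSGKAN and FCEKAN
    modules described in the text; their internals are assumptions.
    """
    def __init__(self, msgkan_path: nn.Module, fcekan_cross: nn.Module):
        super().__init__()
        self.msgkan_path = msgkan_path      # Conv(f_gus(.)) spatial branch
        self.fcekan_cross = fcekan_cross    # frequency-domain cross branch
        self.alpha = nn.Parameter(torch.tensor(1.0))  # learnable spatial weight
        self.beta = nn.Parameter(torch.tensor(1.0))   # learnable frequency weight

    def forward(self, m_r: torch.Tensor, m_t: torch.Tensor) -> torch.Tensor:
        # M'_R = alpha * Conv(f_gus(M_R)) + beta * f_Cross(M_R, M_T)
        return self.alpha * self.msgkan_path(m_r) + self.beta * self.fcekan_cross(m_r, m_t)
```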

2. Multi-Scale Gaussian Basis Design

MSGKAN constructs a nonlinear embedding for each feature vector $x \in \mathbb{R}^{d}$ (where $d = C_{in}$) using a set of $N$ learnable RBF centers $\{c_n\}_{n=1}^N$ and a fixed set of $K$ Gaussian bandwidths $\{h_j\}_{j=1}^K$. The Gaussian basis functions are defined as:

$$\phi_{n,j}(x) = \exp\left(-\frac{\|x - c_n\|^2}{2h_j^2}\right)$$

where the scale parameters $h_j$ (e.g., $h \in \{\tfrac{1}{7}, \tfrac{3}{7}, \tfrac{5}{7}\}$) control the width of the radial response and thereby dictate the receptive field of each basis. Each basis essentially measures feature similarity at a particular scale and with respect to a particular learned centroid.
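A minimal NumPy sketch of this basis, evaluated for a single feature vector; the number of centers, their random initialization, and the example dimensionality are assumptions, while the widths follow the values quoted above:

```python
import numpy as np

def multi_scale_rbf(x: np.ndarray, centers: np.ndarray, widths: np.ndarray) -> np.ndarray:
    """Return phi[n, j] = exp(-||x - c_n||^2 / (2 * h_j^2)).

    x:        (d,)   feature vector
    centers:  (N, d) learnable RBF centers c_n
    widths:   (K,)   Gaussian bandwidths h_j
    """
    sq_dist = np.sum((centers - x) ** 2, axis=1)                       # (N,) squared distances
    return np.exp(-sq_dist[:, None] / (2.0 * widths[None, :] ** 2))    # (N, K)

# Example with assumed sizes: d = 8 channels, N = 4 centers, widths from the quoted bank.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
centers = rng.standard_normal((4, 8))
widths = np.array([1 / 7, 3 / 7, 5 / 7])
phi = multi_scale_rbf(x, centers, widths)   # shape (4, 3): one response per center-scale pair
```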

3. Nonlinear Mapping and KAN-Style Embedding

MSGKAN adopts the Kolmogorov–Arnold paradigm by expressing nonlinear transformations as sums over univariate mappings and their compositions. For each spatial position $p$, the input feature $M_i(p)$ is projected into the multi-scale Gaussian basis and linearly combined using a set of learnable weights $w_{n,j}$ (one per center–scale pair), typically realized via a pointwise convolution:

$$f_{\mathrm{gus}}(M_i(p)) = \sum_{n=1}^{N}\sum_{j=1}^{K} w_{n,j} \, \exp\left(-\frac{\| M_i(p) - c_n \|^2}{2h_j^2}\right)$$

$$M'_i(p) = \mathrm{Conv}\bigl(f_{\mathrm{gus}}(M_i(p))\bigr)$$

Here, the Conv operator is generally a compact $1 \times 1$ (or optionally $3 \times 3$) convolution that restores spatial context after global RBF embedding.
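A hedged PyTorch sketch of the mapping above, assuming a (B, C, H, W) tensor layout; because both the weighted summation over $w_{n,j}$ and the subsequent $1 \times 1$ Conv are linear, the sketch folds them into a single pointwise convolution over the stacked basis responses. The module name and sizes are illustrative, not the authors' code:

```python
import torch
import torch.nn as nn

class GaussianEmbedding(nn.Module):
    """Multi-scale Gaussian RBF embedding followed by a pointwise Conv.

    Assumed layout: input (B, C_in, H, W), output (B, C_out, H, W).
    The pointwise conv jointly realizes the w_{n,j} summation and the Conv step.
    """
    def __init__(self, in_channels: int, out_channels: int,
                 num_centers: int = 8, widths=(1 / 7, 3 / 7, 5 / 7)):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_centers, in_channels))  # learnable c_n
        self.register_buffer("widths", torch.tensor(widths))                # fixed bandwidths h_j
        self.mix = nn.Conv2d(num_centers * len(widths), out_channels, kernel_size=1)

    def forward(self, m: torch.Tensor) -> torch.Tensor:
        b, c, h, w = m.shape
        x = m.permute(0, 2, 3, 1).reshape(-1, c)                          # (B*H*W, C): vectors M_i(p)
        sq_dist = torch.cdist(x, self.centers).pow(2)                     # (B*H*W, N): ||M_i(p) - c_n||^2
        phi = torch.exp(-sq_dist.unsqueeze(-1) / (2 * self.widths ** 2))  # (B*H*W, N, K)
        phi = phi.reshape(b, h, w, -1).permute(0, 3, 1, 2)                # (B, N*K, H, W)
        return self.mix(phi)                                              # -> (B, C_out, H, W)
```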

4. Algorithmic Steps for Scale-Adaptive Feature Modeling

MSGKAN's procedure to encode scale variance at each spatial location can be summarized as follows:

  • (a) Layer Normalization: Normalize $M_i$ across channels ($\mathrm{LayerNorm}(M_i)$) to achieve zero mean and unit variance.
  • (b) Distance Computation: For each spatial location $p$, compute Euclidean distances $r_n(p) = \| M_i(p) - c_n \|$ to the $N$ centers.
  • (c) Multi-Scale RBF Expansion: Compute $\phi_{n,j}(p) = \exp\left(-r_n^2(p) / (2h_j^2)\right)$ for each scale $j$. Smaller $h_j$ are sensitive to fine details, while larger $h_j$ accommodate broader, large-scale structures.
  • (d) Weighted Summation: Multiply each $\phi_{n,j}(p)$ by its corresponding learnable weight $w_{n,j}$ and sum across all center–scale pairs, enabling the model to emphasize certain scales for specific scenes.
  • (e) Local Convolution: Integrate the resulting embedding to output features of dimensionality $C_{out}$ using a local Conv, allowing spatial mixing and channel reweighting (see the sketch after this list).
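A functional sketch tying steps (a)–(e) together, reusing the GaussianEmbedding sketch from the previous section and adding the channel-wise LayerNorm of step (a); names and shapes remain assumptions:

```python
import torch
import torch.nn.functional as F

def msgkan_forward(m: torch.Tensor, embed: "GaussianEmbedding") -> torch.Tensor:
    """Apply steps (a)-(e) to a feature map m of shape (B, C, H, W)."""
    # (a) LayerNorm across channels: zero mean, unit variance per spatial location.
    m_nhwc = m.permute(0, 2, 3, 1)
    m_norm = F.layer_norm(m_nhwc, normalized_shape=(m.shape[1],)).permute(0, 3, 1, 2)
    # (b) distances to the N centers, (c) multi-scale RBF expansion,
    # (d) weighted summation over center-scale pairs, and (e) local 1x1 Conv
    # are all performed inside the GaussianEmbedding sketch above.
    return embed(m_norm)

# Example usage with assumed sizes: 64 input channels, 64 output channels.
feats = torch.randn(2, 64, 32, 32)
module = GaussianEmbedding(in_channels=64, out_channels=64)
out = msgkan_forward(feats, module)   # (2, 64, 32, 32)
```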

Because real-world scale variations in aerial imagery are linked to UAV altitude, a fixed bank of Gaussian widths provides adaptability to object size changes without downstream modifications to backbone feature map resolution.

5. Training Paradigm and Loss Integration

MSGKAN does not receive a separate, module-specific loss function but is optimized indirectly via losses imposed by the full SFFR detection architecture. Training employs standard multi-task detection objectives:

  • Varifocal Loss for classification confidence:

$$\mathrm{VFL}(p, q) = \begin{cases} -q \left[ q \log p + (1-q) \log (1-p) \right], & q > 0 \\ -\alpha\, p^{\gamma} \log(1-p), & q = 0 \end{cases}$$

which adaptively emphasizes challenging positive detections.

  • Box Regression via $\ell_1$ and IoU-style penalization.

The joint gradient flow from the classification and regression objectives guides the adaptation of the learnable RBF centers $\{c_n\}$ and the linear weights $w_{n,j}$, leading to learned specialization on dataset-specific scale statistics; the bandwidths $\{h_j\}$ themselves are fixed hyperparameters, chosen via the scale sweep described below.
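For illustration, a hedged PyTorch sketch of the classification term as written above; the $\alpha = 0.75$ and $\gamma = 2.0$ defaults follow the common varifocal loss formulation and are assumptions not confirmed by the source:

```python
import torch

def varifocal_loss(p: torch.Tensor, q: torch.Tensor,
                   alpha: float = 0.75, gamma: float = 2.0) -> torch.Tensor:
    """VFL(p, q) with p = predicted confidence in (0, 1) and q = IoU-aware target.

    q > 0  : positives, binary cross-entropy scaled by the target quality q.
    q == 0 : negatives, down-weighted by alpha * p**gamma.
    """
    eps = 1e-8
    pos = q > 0
    bce = -(q * torch.log(p + eps) + (1 - q) * torch.log(1 - p + eps))
    neg_term = alpha * p.pow(gamma) * (-torch.log(1 - p + eps))
    loss = torch.where(pos, q * bce, neg_term)
    return loss.mean()
```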

6. Empirical Performance and Scale Robustness

Comprehensive experimental validation on the SeaDronesSee dataset demonstrates that the inclusion of MSGKAN yields measurable improvements in object detection performance:

  • Baseline (without MSGKAN): mAP$_{50}$ = 61.1%, mAP = 31.4%.
  • With MSGKAN: mAP$_{50}$ = 62.2% (+1.1%), mAP = 32.2% (+0.8%).
  • A systematic sweep of scale parameters (Table V) revealed that using scale widths $[1/7, 3/7, 5/7]$ achieves optimal results with mAP$_{50}$ = 66.0%, mAP = 32.5%, outperforming both coarser ($[1/3, 2/3]$) and denser ($[1/9, 3/9, 5/9, 7/9]$) settings.

These findings substantiate the contribution of multi-scale Gaussian embeddings to dynamic scale adaptation and indicate a sweet spot in the bank of scales for robust aerial object detection performance.

7. Operational Principles and Significance

MSGKAN exemplifies a KAN-inspired submodule that lifts each feature vector into a high-dimensional Gaussian RBF space, applies end-to-end-learned scale-emphasizing weights, and reprojects these representations back to a task-compatible feature space via a lightweight local convolution. This design confers robust, data-driven adaptability to object size changes caused by varying UAV altitudes, obviating the need for explicit multi-scale image resizing or architectural modifications in the backbone. Empirical evidence confirms that this approach measurably improves multispectral object detection benchmarks, in both accuracy and scale robustness, rendering it directly applicable to real-world UAV perception pipelines for heterogeneous environments.
