
ESAM Kernel in Hyperspectral Classification

Updated 23 January 2026
  • ESAM kernel is a spectral-similarity Mercer kernel that measures the angular distance between hyperspectral vectors, ensuring brightness invariance.
  • It integrates with SVM, GP, and MRF frameworks by tuning hyperparameters, thereby influencing spatial-spectral classification outcomes.
  • Empirical evaluations show that ESAM underperforms the SE kernel in classification accuracy, though its brightness-invariant angular similarity and comparable computational cost remain notable.

The Exponential Spectral Angle Mapper (ESAM) kernel is a spectral-similarity-based Mercer kernel and covariance function designed for supervised classification of hyperspectral imagery. ESAM quantifies the angular distance between hyperspectral vectors and maps it to a positive-definite kernel function, providing invariance to multiplicative brightness (illumination) changes. ESAM has been incorporated into both support vector machines (SVMs) and Gaussian process (GP) classifiers for pixel-wise and spatial-spectral classification tasks, notably in grid-structured Markov random field (MRF) frameworks for remote sensing image analysis (Gewali et al., 2016).

1. Definition and Mathematical Formulation

The ESAM kernel is built upon the Spectral Angle Mapper (SAM) metric. For two $d$-dimensional hyperspectral vectors $\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$, the SAM is defined as

$$\theta(\mathbf{x}, \mathbf{y}) = \cos^{-1}\left( \frac{\mathbf{x} \cdot \mathbf{y}}{\|\mathbf{x}\| \, \|\mathbf{y}\|} \right)$$

This angle $\theta$ serves as an illumination-invariant measure of similarity.

The ESAM kernel itself is given by

$$k_{\mathrm{ESAM}}(\mathbf{x}_1, \mathbf{x}_2) = \sigma_0^2 \exp\left( -\frac{\theta(\mathbf{x}_1, \mathbf{x}_2)}{\sigma_1^2} \right)$$

with hyperparameters $\sigma_0^2$ (gain or signal variance) and $\sigma_1^2$ (scale or length-scale), and $\theta$ as in the SAM definition. The kernel decays monotonically as the spectral angle increases and is positive definite by construction.
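The two formulas above can be sketched as a vectorized kernel-matrix computation; this is an illustrative NumPy implementation (the function name and argument conventions are my own, not from the paper):

```python
import numpy as np

def esam_kernel(X1, X2, sigma0_sq=1.0, sigma1_sq=1.0):
    """ESAM kernel matrix between the rows of X1 (n1 x d) and X2 (n2 x d).

    k(x, y) = sigma0^2 * exp(-theta(x, y) / sigma1^2),
    where theta(x, y) is the spectral angle (SAM) between x and y.
    """
    # Normalize rows so that dot products become cosines of spectral angles.
    U1 = X1 / np.linalg.norm(X1, axis=1, keepdims=True)
    U2 = X2 / np.linalg.norm(X2, axis=1, keepdims=True)
    # Clip guards against round-off pushing cosines slightly outside [-1, 1].
    cos_theta = np.clip(U1 @ U2.T, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    return sigma0_sq * np.exp(-theta / sigma1_sq)
```

Because only the angle between vectors enters the kernel, scaling a spectrum by a positive brightness factor leaves its kernel values unchanged, which is the invariance property discussed above.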

2. Hyperparameters and Their Interpretation

The ESAM kernel features two hyperparameters:

  • $\sigma_0^2$ (gain): Scales the overall amplitude of the kernel. In Gaussian process models, $\sigma_0^2$ corresponds to the prior variance on the latent function. In SVMs, it simply scales the decision function without changing the location of classification boundaries.
  • $\sigma_1^2$ (scale): Regulates the rate of decay of the kernel as $\theta$ increases. A smaller $\sigma_1^2$ sets a stricter similarity threshold, causing rapid decay (only nearly identical spectra are similar), while a larger value allows broader similarity, treating vectors with larger spectral angles as relatively close.

Both parameters are selected via cross-validation or marginal likelihood maximization, depending on the classifier used. In SVM implementations, $\sigma_0^2$ is typically fixed (often to 1), while $\sigma_1$ and the soft margin constant $C$ are cross-validated. In GPs, both are learned by marginal likelihood optimization under positivity constraints (Gewali et al., 2016).
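A minimal numeric illustration of the length-scale's effect, with $\sigma_0^2 = 1$ and arbitrarily chosen values for the angle and scales:

```python
import numpy as np

# Kernel value at a fixed spectral angle of 10 degrees under a strict
# (small) and a loose (large) length-scale sigma1^2.
theta = np.deg2rad(10.0)
k_strict = np.exp(-theta / 0.01)  # small sigma1^2: similarity decays rapidly
k_loose = np.exp(-theta / 10.0)   # large sigma1^2: broad similarity
```

Under the strict scale even a 10-degree spectral angle yields near-zero similarity, while under the loose scale the same pair is treated as highly similar.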

3. Integration with SVM, GP, and MRF Frameworks

The ESAM kernel replaces the conventional squared exponential (SE) kernel within SVM and GP classifiers for hyperspectral pixel classification.

  • SVM-ESAM: Uses $k_{\mathrm{ESAM}}$ in multi-class SVMs (e.g., LIBSVM "one-versus-one"). Probabilistic class posteriors $P(y = c \mid \mathbf{x})$ are estimated through Platt scaling or inherent SVM likelihood procedures, then mapped to grid-structured unary MRF energies via $E_i(y_i = c) = -\ln P(y_i = c \mid \mathbf{x}_i)$.
  • GP-ESAM: Employs $k_{\mathrm{ESAM}}$ as the covariance function in GPs, with one binary GP per class pair and fusion via the Wu–Rangarajan method. Hyperparameters are learned by marginal likelihood maximization, with the Laplace approximation used for class-probability inference. Unary MRF energies are constructed analogously.
  • Optionally, a pairwise Potts model with cost parameter $\beta$ is incorporated:
    $$E_{ij}(y_i, y_j) = \begin{cases} 0, & y_i = y_j \\ \beta, & y_i \ne y_j \end{cases}$$
    Inference in the full MRF is performed by minimizing the total energy using expansion-move graph cuts.
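The SVM-ESAM path (precomputed kernel, Platt-scaled posteriors, unary energies) can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: scikit-learn's `SVC` with a precomputed kernel substitutes for LIBSVM, the data are synthetic, and $\sigma_0^2$ is fixed to 1 as is typical for SVMs:

```python
import numpy as np
from sklearn.svm import SVC

def esam_kernel(X1, X2, sigma1_sq=1.0):
    """ESAM kernel matrix (sigma0^2 fixed to 1)."""
    U1 = X1 / np.linalg.norm(X1, axis=1, keepdims=True)
    U2 = X2 / np.linalg.norm(X2, axis=1, keepdims=True)
    theta = np.arccos(np.clip(U1 @ U2.T, -1.0, 1.0))
    return np.exp(-theta / sigma1_sq)

rng = np.random.default_rng(0)
# Two synthetic "spectra" classes differing in shape, not brightness.
X_train = np.vstack([rng.normal([1, 5, 1], 0.1, (20, 3)),
                     rng.normal([5, 1, 5], 0.1, (20, 3))])
y_train = np.array([0] * 20 + [1] * 20)
X_test = rng.normal([1, 5, 1], 0.1, (5, 3)) * 3.0  # brighter class-0 pixels

# probability=True enables Platt scaling for class posteriors.
svm = SVC(kernel="precomputed", C=1.0, probability=True, random_state=0)
svm.fit(esam_kernel(X_train, X_train), y_train)
P = svm.predict_proba(esam_kernel(X_test, X_train))

# Unary MRF energies: E_i(c) = -ln P(y_i = c | x_i)
unary = -np.log(np.clip(P, 1e-12, None))

# Pairwise Potts energy between neighboring pixels (beta cross-validated).
beta = 1.0
def potts(y_i, y_j):
    return 0.0 if y_i == y_j else beta
```

Note that the test pixels are brightness-scaled copies of class-0 spectra; the angular kernel classifies them correctly regardless of the scaling.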

4. Empirical Performance and Computational Aspects

Empirical studies using standard hyperspectral scenes (Indian Pines, University of Pavia) show that, in pixel-wise classification, both SVM-ESAM and GP-ESAM underperform their SE-kernel counterparts in terms of classification accuracy. When embedded within MRF frameworks to exploit spatial context, this trend persists: SE-MRF achieves higher accuracy across training set sizes.

Illustrative results for Indian Pines (50 pixels/class):

Method         Accuracy (%)   Running Time (s)
SVM-SE         73.4 ± 4.1     comparable (kernel evaluation)
SVM-ESAM       71.8 ± 1.9     comparable (kernel evaluation)
GP-SE          72.6 ± 1.9     $O(n^3)$ in training set size
GP-ESAM        66.4 ± 2.0     $O(n^3)$ (as GP-SE)
SVM-SE-MRF     86.6           longer
SVM-ESAM-MRF   85.6           longer
GP-SE-MRF      87.3           44 (50 samples/class)
GP-ESAM-MRF    84.2           44 (50 samples/class)
SAM-MRF        --             3 (50 samples/class)

SVM and GP classifier runtimes are dominated by kernel evaluation and, for GPs, by cubic training time complexity. SAM-MRF, using only spectral angles without SVM/GP inference, is dramatically faster (3 seconds vs 44 seconds for GP-SE-MRF, 50 samples/class) (Gewali et al., 2016).

5. Guidelines for Hyperparameter Selection and Implementation

Standardized protocols for using the ESAM kernel in classification pipelines include:

  • Data preprocessing: Standardize each band of hyperspectral data (zero-mean, unit variance).
  • Parameter selection: For SVMs, fix $\sigma_0^2 = 1$; cross-validate $\sigma_1$ (typical range $10^{-3}$ to $10^3$) and $C$. For GPs, initialize $\sigma_0^2$ and $\sigma_1^2$ to 1 and optimize via type-II maximum likelihood.
  • MRF regularization: Cross-validate the spatial smoothness penalty $\beta$ (e.g., grid search over $\{0.01, 0.1, 1, 10, 100\}$) on a held-out subset.
  • Workflow: Preprocess data, split into training/testing, train SVM or GP with ESAM, compute class probabilities, assign unary energies, construct the MRF, minimize total energy via graph cuts, and calculate accuracy.
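The cross-validated selection of $\sigma_1^2$ and $C$ described above can be sketched as a grid search over precomputed kernel matrices. This is a schematic helper under my own naming, using scikit-learn's support for precomputed kernels in cross-validation:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def esam_kernel(X1, X2, sigma1_sq=1.0):
    """ESAM kernel matrix (sigma0^2 fixed to 1, as usual for SVMs)."""
    U1 = X1 / np.linalg.norm(X1, axis=1, keepdims=True)
    U2 = X2 / np.linalg.norm(X2, axis=1, keepdims=True)
    theta = np.arccos(np.clip(U1 @ U2.T, -1.0, 1.0))
    return np.exp(-theta / sigma1_sq)

def select_hyperparameters(X, y, sigma1_grid, C_grid, cv=3):
    """Return the (sigma1^2, C, score) maximizing cross-validated accuracy."""
    best = (None, None, -np.inf)
    for s in sigma1_grid:
        K = esam_kernel(X, X, sigma1_sq=s)  # precompute once per sigma1^2
        for C in C_grid:
            svm = SVC(kernel="precomputed", C=C)
            score = cross_val_score(svm, K, y, cv=cv).mean()
            if score > best[2]:
                best = (s, C, score)
    return best
```

Precomputing the kernel matrix once per $\sigma_1^2$ avoids redundant angle computations across the $C$ grid; scikit-learn slices the square matrix consistently across CV folds for precomputed-kernel estimators.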

A plausible implication is that the rigorous angular similarity property of ESAM may be particularly attractive in domains where brightness invariance is crucial; however, in standard classification metrics on benchmark hyperspectral scenes, this does not translate into superior performance relative to SE kernels for the evaluated pipelines.

6. Context and Comparative Landscape

ESAM provides a spectral angle–driven alternative to the SE kernel; it is directly compatible with probabilistic and margin-based classifiers and fully interchangeable with SE kernels in modern workflows. The findings of Gewali et al. (2016) indicate that, while ESAM is competitive and computationally comparable to SE for both SVM and GP classifiers, its empirical performance is generally slightly lower on the tested benchmarks, in both pixel-wise and MRF-augmented settings.

A plausible implication is that the choice between ESAM and SE may depend on domain constraints or the particular characteristics of hyperspectral datasets, rather than on clear overall superiority of one kernel function over the other. The significantly lower running time of SAM-MRF (minimum angle as unary energy) also highlights the potential computational benefits when omitting the kernel-method stage.

7. Summary

The Exponential Spectral Angle Mapper (ESAM) kernel encodes angular similarity between hyperspectral vectors and is parameterized by gain and angular scale hyperparameters. It is readily integrated into SVMs, GPs, and MRFs for spatial-spectral classification. Empirical evidence indicates minor but consistent underperformance of ESAM relative to the squared exponential kernel on benchmark remote sensing tasks, though with similar computational requirements and established workflows for hyperparameter selection and implementation (Gewali et al., 2016).
