
Olfactory EEG Signal Classification Network

Updated 20 January 2026
  • The paper introduces OESCN, a deep learning framework that classifies olfactory EEG signals using adaptive frequency band extraction and subject-specific attention.
  • It employs a four-stage pipeline—PSD estimation, multi-scale band generation, attention mechanism, and a spatio-spectral CNN—to process and classify EEG data.
  • Benchmarking shows OESCN achieves 97.1% accuracy, outperforming EEGNet by 13.3 percentage points and substantially reducing inter-subject variability.

The Olfactory EEG Signal Classification Network (OESCN) is a deep learning framework specifically designed for classifying electroencephalogram (EEG) signals elicited by olfactory stimuli. The architecture emphasizes adaptive frequency band feature extraction and subject-specific spectral attention, coupled with a compact spatio-spectral convolutional neural network (CNN), achieving robust and high-accuracy classification across individuals for 13-class odor recognition tasks (Sun et al., 2022).

1. Architectural Overview

OESCN is constructed as a four-stage pipeline optimized for extracting discriminative representations from olfactory-induced EEG. The stages are:

  1. Pre-processing & PSD Estimation: Raw EEG data $X \in \mathbb{R}^{C \times T}$ ($C$ channels, $T$ time samples) are processed via Welch’s periodogram per channel, producing a power spectral density (PSD) representation $F \in \mathbb{R}^{C \times P}$, where $P$ denotes the number of frequency bins over 0.5–70 Hz.
  2. Frequency Band Generator: A sliding-window mechanism extracts candidate frequency sub-bands over the PSD, aggregating multi-scale bandwise features into $S \in \mathbb{R}^{C \times K}$.
  3. Frequency Band Attention Mechanism: Subject-specific attention is imposed on bandwise features: multi-head self-attention combines a global head and several local heads, followed by head fusion and a skip connection, yielding a re-weighted spatio-spectral map $M' \in \mathbb{R}^{C \times K}$.
  4. Spatio-Spectral CNN Classifier: $M'$ serves as a single-channel 2D “image,” processed via parallel convolutions of varying kernel sizes, pooled, and passed through fully-connected (FC) layers to output a 13-way softmax for odor identification.

The data flow is: EEG input $X \rightarrow$ Welch PSD $\rightarrow$ Band Generator $\rightarrow$ Attention $\rightarrow$ CNN $\rightarrow$ softmax.

2. Frequency Band Generation

The frequency band generator performs an exhaustive, multi-scale sweep over the PSD for each channel:

  • For window lengths $L_i \in \{1, 5, 10, 15, 20\}$ Hz, with a step of $G = 1$ Hz, the number of bands per $L_i$ is:

$$B_{ci} = \left\lfloor \frac{P - L_i}{G} \right\rfloor$$

  • For each slice $A^{c,i} \in \mathbb{R}^{1 \times L_i \times B_{ci}}$, average the spectral power over the $L_i$ bins:

$$D^{c,i}_j = \frac{1}{L_i} \sum_{k=1}^{L_i} A^{c,i}_{k,j}, \quad j = 1, \dots, B_{ci}$$

  • Concatenate all $D^{c,i}$ to form $S_c \in \mathbb{R}^{1 \times K}$, where $K = \sum_{i=1}^{5} B_{ci}$.
  • The band-combination tensor $S \in \mathbb{R}^{C \times K}$ is obtained by stacking across channels.

This process enables high-resolution, adaptive capturing of both narrow and broad frequency features, essential for encoding the olfactory event-related EEG.
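As a concrete illustration, the sweep above can be sketched in NumPy. This is a paraphrase of the described procedure, not the authors' code; the function name `band_generator` and its defaults are illustrative.

```python
import numpy as np

def band_generator(F, lengths=(1, 5, 10, 15, 20), step=1):
    """Multi-scale band averaging over a per-channel PSD F of shape (C, P).

    For each window length L, slide a window of L frequency bins with the
    given step, average the PSD inside each window, and concatenate all
    scales along the band axis, yielding S of shape (C, K).
    """
    C, P = F.shape
    bands = []
    for L in lengths:
        n = (P - L) // step  # floor((P - L) / G), as in the band-count formula
        D = np.stack(
            [F[:, j * step : j * step + L].mean(axis=1) for j in range(n)],
            axis=1,
        )  # shape (C, n)
        bands.append(D)
    return np.concatenate(bands, axis=1)  # shape (C, K)
```

For a PSD with $P = 70$ one-hertz bins, this yields $K = 69 + 65 + 60 + 55 + 50 = 299$ candidate bands per channel.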

3. Frequency Band Attention Mechanism

This module adaptively emphasizes subject-relevant bands through multi-head self-attention:

  • Global Head:
    • Linear transformations:

    $$Q^{\mathrm{glo}} = S W^q, \quad K^{\mathrm{glo}} = S W^k, \quad V^{\mathrm{glo}} = S W^v$$

    where $W^{(\cdot)} \in \mathbb{R}^{K \times K}$.
    • Scaled dot-product attention:

    $$H^{\mathrm{glo}} = \mathrm{Softmax}\left(\frac{Q^{\mathrm{glo}} (K^{\mathrm{glo}})^\top}{\sqrt{C}}\right) V^{\mathrm{glo}} \in \mathbb{R}^{C \times K}$$

  • Local Heads (one per $L_i$): apply the same attention to each $S^i \in \mathbb{R}^{C \times B_{ci}}$.

  • Head Fusion:

    • Concatenate outputs: $X = \mathrm{Concat}(H^{\mathrm{glo}}, H^{\mathrm{loc}})$.
    • Max-pooling and average-pooling:

    $$X_{\max} = \mathrm{MaxPool}(X), \qquad X_{\mathrm{avg}} = \mathrm{AvgPool}(X)$$

    • Fuse via $1\times1$ convolution:

    $$M = \mathrm{Conv}_{1\times1}\left(\mathrm{Concat}(X_{\max}, X_{\mathrm{avg}})\right)$$

    • Add skip connection:

    $$M' = M + S$$

The attention mechanism is explicitly trained for each subject’s dataset split, ensuring adaptation to inter-subject spectral variability.
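A minimal NumPy sketch of one attention head, following the global-head equations above (illustrative only; the weight shapes assume $W^{(\cdot)} \in \mathbb{R}^{K \times K}$, and the pooling/fusion stage is omitted):

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(S, Wq, Wk, Wv):
    """Scaled dot-product attention over channels, as in the global head.

    S: (C, K) band-combination matrix; Wq, Wk, Wv: (K, K) projections.
    Returns an attended map of shape (C, K).
    """
    Q, Km, V = S @ Wq, S @ Wk, S @ Wv
    C = S.shape[0]
    A = softmax(Q @ Km.T / np.sqrt(C), axis=-1)  # (C, C) attention weights
    return A @ V
```

A local head is the same computation restricted to the $B_{ci}$ columns of one window scale.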

4. Spatio-Spectral CNN Classifier

The CNN receives $M' \in \mathbb{R}^{C \times K}$, interpreted as a single-channel image of size $(C, K)$:

  • Layer 0: Reshape to $(1, C, K)$.

  • Layer 1: Parallel 2D convolutions:

    • $3\times3$, $8\times8$, and $15\times15$ kernels ($F_1$ filters each), ELU activations, concatenated → $(3F_1, C, K)$.
  • Layer 2: Average-pooling across spatial dimensions.
  • Layer 3: $3\times3$ convolution with $F_2$ filters (ELU), then average-pooling.
  • Layer 4: Flatten, FC (128 units, ELU, BN, Dropout 0.25), FC (64 units, ELU, BN, Dropout 0.25), FC (13 units, softmax).

Typical hyperparameters: $F_1 = 32$, $F_2 = 64$, stride 1, padding "same". This structure exploits both electrode (spatial) and frequency-band (spectral) topology.
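Since padding "same" with stride 1 preserves spatial size and each average-pooling stage shrinks it, the layer shapes can be traced with plain arithmetic. The sketch below assumes $C = 30$, $K = 299$, and a pooling factor of 2; the paper's exact pooling parameters may differ.

```python
def cnn_output_shapes(C=30, K=299, F1=32, F2=64, pool=2):
    """Trace tensor shapes through the spatio-spectral CNN.

    Convolutions use padding='same' and stride 1, so they preserve (C, K);
    each average-pooling divides both spatial dimensions by `pool`.
    """
    shapes = [("input", (1, C, K))]
    shapes.append(("parallel 3x3/8x8/15x15 convs, concat", (3 * F1, C, K)))
    C2, K2 = C // pool, K // pool
    shapes.append(("avg-pool", (3 * F1, C2, K2)))
    C3, K3 = C2 // pool, K2 // pool
    shapes.append(("3x3 conv (F2 filters) + avg-pool", (F2, C3, K3)))
    shapes.append(("flatten", (F2 * C3 * K3,)))
    shapes.extend([("fc-128", (128,)), ("fc-64", (64,)), ("softmax", (13,))])
    return shapes
```

With the defaults, the parallel stage emits a $(96, 30, 299)$ tensor and the network ends in a 13-way softmax.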

5. Training and Evaluation Protocol

  • Dataset: 11 healthy subjects, 13 odors, 35 trials each, yielding 5,005 trials in total. EEG is sampled at 1 kHz over 32 channels (30 analyzed), 10 seconds per trial.
  • Pre-processing: PSD extracted via Welch’s method (Hamming window, 200 samples, overlap 8).
  • Cross-validation: 10-fold, per subject.
  • Optimization: cross-entropy loss, Adam optimizer (learning rate $1\times10^{-4}$), 500 epochs, batch size 39.
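The protocol's bookkeeping can be checked with plain arithmetic (not from the paper's code; the batch-size decomposition is one plausible reading):

```python
# Sanity-check the dataset numbers stated in the protocol above.
subjects, odors, trials_per_odor = 11, 13, 35
total_trials = subjects * odors * trials_per_odor   # 11 * 13 * 35 = 5005
trials_per_subject = odors * trials_per_odor        # 455 trials per subject
batch_size = 39                                     # plausibly 13 odors x 3 trials
```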

Performance Benchmarking:

OESCN was benchmarked against EEGNet and AFBD-SVM; results are summarized below:

| Method   | Average Accuracy (%) | Inter-subject Std |
|----------|----------------------|-------------------|
| EEGNet   | 83.8                 | 12.3              |
| AFBD-SVM | 87.3                 | 10.2              |
| OESCN    | 97.1                 | 3.7               |

OESCN delivers a 13.3-percentage-point gain over EEGNet and substantially reduces inter-subject variability.

6. Ablation Analysis

Two OESCN variants were constructed to assess the contribution of each module:

  • OESCN_a1: removes the attention mechanism, passing SS directly to the CNN.
  • OESCN_a2: removes both band generator and attention, using fixed uniform sub-bands plus CNN.

| Variant  | Avg Acc (%) | Inter-subj Std |
|----------|-------------|----------------|
| OESCN_a2 | 94.3        | 5.3            |
| OESCN_a1 | 95.9        | 4.5            |
| OESCN    | 97.1        | 3.7            |

Removing attention reduces accuracy by ≈1.2 percentage points; eliminating both the generator and attention reduces it by a further ≈1.6 points. This demonstrates that both exhaustive band extraction and subject-specific attention are critical for top performance and robustness.

7. Algorithmic Workflow

A single training epoch for one subject proceeds as:

for each minibatch {X, y}:
    # 1. Compute the PSD per channel
    F = welch_psd(X)                          # shape (batch, C, P)
    # 2. Band generator: multi-scale sliding-window averages
    S = []
    for Li in (1, 5, 10, 15, 20):
        windows = sliding_windows(F, length=Li, step=1)
        S.append(mean_over_window(windows))   # shape (batch, C, B_i)
    S = concat_along_band_dim(S)              # shape (batch, C, K)
    # 3. Attention: global head, local heads, pooled fusion, skip connection
    H_glo = self_attention(S)                                           # global head
    H_loc = concat([self_attention(S_i) for S_i in split_by_scale(S)])  # local heads
    H = concat([H_glo, H_loc])
    M = conv1x1(concat([max_pool(H), avg_pool(H)]))
    M_prime = M + S
    # 4. CNN classifier and parameter update
    logits = cnn(M_prime)                     # shape (batch, 13)
    loss = cross_entropy(logits, y)
    backpropagate_and_update(params)

The design leverages a comprehensive multiscale frequency extraction strategy and lightweight subject-specific attention. This hybrid architecture yields state-of-the-art olfactory EEG classification accuracy and significantly enhanced inter-subject robustness (Sun et al., 2022).
