Contrastive and Multi-Task Learning on Noisy Brain Signals with Nonlinear Dynamical Signatures
Abstract: We introduce a two-stage multitask learning framework for analyzing Electroencephalography (EEG) signals that integrates denoising, dynamical modeling, and representation learning. In the first stage, a denoising autoencoder is trained to suppress artifacts and stabilize temporal dynamics, providing robust signal representations. In the second stage, a multitask architecture processes these denoised signals to achieve three objectives: motor imagery classification, chaotic versus non-chaotic regime discrimination using Lyapunov exponent-based labels, and self-supervised contrastive representation learning with NT-Xent loss. A convolutional backbone combined with a Transformer encoder captures spatial-temporal structure, while the dynamical task encourages sensitivity to nonlinear brain dynamics. This staged design mitigates interference between reconstruction and discriminative goals, improves stability across datasets, and supports reproducible training by clearly separating noise reduction from higher-level feature learning. Empirical studies show that our framework not only enhances robustness and generalization but also surpasses strong baselines and recent state-of-the-art methods in EEG decoding, highlighting the effectiveness of combining denoising, dynamical features, and self-supervised learning.
Explain it Like I'm 14
Overview
This paper is about teaching a computer to understand messy brain signals (EEG) better. The goal is to clean up the signals first and then use them to:
- tell if someone is doing a real movement or just imagining it, and
- recognize whether the brain’s signal patterns are “chaotic” or “non-chaotic.”
The authors build a two-stage system that first removes noise and then uses a smart model to learn several things at once, making the final results more accurate and reliable.
Key Objectives
The system learns three things at the same time. Here’s what it tries to do:
- Motor imagery classification: Decide if the EEG comes from a real movement or an imagined one.
- Chaos detection: Decide if the brain signal behaves in a chaotic way (very sensitive and unpredictable) or a non-chaotic way (more regular and stable).
- Contrastive representation learning: Learn strong, noise-resistant features by making two different versions of the same signal look similar to the computer, while making different signals look different.
Methods and Approach
Think of the approach like cleaning a blurry photo before using it in a facial recognition app. The system has two stages:
Stage 1: Denoising Autoencoder (DAE) — “Clean the signal”
- EEG signals are full of unwanted noise, like eye blinks or muscle twitches.
- A denoising autoencoder is a type of neural network that takes a noisy signal and learns to reconstruct a cleaner version. It’s like noise-canceling headphones for brain signals.
- The model is trained using many examples so it learns to keep the important parts of the EEG (the brain activity) and remove the junk.
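To make the cleaning stage concrete, here is a minimal denoising autoencoder along these lines in PyTorch. This is a toy sketch, not the paper's exact architecture: the channel count (64), window length (320 samples), and layer sizes are assumptions for illustration, and the paper's additional spectral loss term is omitted.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder1D(nn.Module):
    """Minimal 1D-conv DAE: noisy EEG in, cleaned EEG out.
    Channel counts and kernel sizes are illustrative, not the paper's."""
    def __init__(self, n_channels: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 16, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(16, 32, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(32, n_channels, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training-step sketch: corrupt the input, reconstruct the clean target.
model = DenoisingAutoencoder1D(n_channels=64)
clean = torch.randn(8, 64, 320)                # batch of "clean" EEG windows
noisy = clean + 0.1 * torch.randn_like(clean)  # synthetic corruption
loss = nn.SmoothL1Loss()(model(noisy), clean)  # SmoothL1, as the paper uses
loss.backward()
```

In practice the "clean" targets come from filtered recordings rather than synthetic data, but the training loop has this shape.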
Stage 2: Multitask Model — “Learn several things at once”
- After cleaning, the signal goes into a shared encoder made of:
  - A convolutional network (CNN): like a magnifying glass that looks at short patterns in the signal.
  - A Transformer: like a smart reader that pays attention to long-term relationships across time and across EEG channels (different electrodes on the head).
- From this encoder, the model branches into three “heads” (small modules), one for each task:
  - Motor imagery head: predicts real vs. imagined movement.
  - Chaos head: predicts chaotic vs. non-chaotic dynamics.
  - Contrastive head: learns useful features by comparing two augmented versions of the same EEG (for example, slightly noisier or slightly rescaled) and pushing apart embeddings from different EEG trials.
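The encoder-plus-heads design can be sketched as follows. All dimensions, depths, and attention-head counts here are illustrative assumptions, not the paper's reported configuration:

```python
import torch
import torch.nn as nn

class MultiTaskEEGModel(nn.Module):
    """Shared CNN+Transformer encoder feeding three task heads.
    All sizes are illustrative; the paper's exact depths/dims may differ."""
    def __init__(self, n_channels=64, d_model=64, embed_dim=32):
        super().__init__()
        # CNN stem: local temporal patterns, downsampled in time
        self.stem = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=7, stride=4, padding=3),
            nn.ReLU(),
        )
        # Transformer: long-range dependencies across the sequence
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.mi_head = nn.Linear(d_model, 2)        # real vs. imagined movement
        self.chaos_head = nn.Linear(d_model, 2)     # chaotic vs. non-chaotic
        self.contrastive_head = nn.Linear(d_model, embed_dim)  # embedding

    def forward(self, x):                    # x: (batch, channels, time)
        h = self.stem(x).transpose(1, 2)     # -> (batch, seq, d_model)
        h = self.transformer(h).mean(dim=1)  # mean-pool over time
        return self.mi_head(h), self.chaos_head(h), self.contrastive_head(h)

model = MultiTaskEEGModel()
mi_logits, chaos_logits, z = model(torch.randn(4, 64, 320))
```

Each head's loss is computed separately and the three are combined with weights during training.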
What does “chaotic vs. non-chaotic” mean here?
- Imagine two runners starting very close together. If they quickly move far apart because tiny differences grow fast, that’s “chaos.”
- The paper uses tools from math called Lyapunov exponents to label signals as chaotic (positive exponent), periodic (negative), or quasiperiodic (around zero). For training, they mainly use “chaotic vs. non-chaotic.”
- They create these labels automatically (without manual labeling) using:
  - A small RNN model that reconstructs the signal and lets them compute Lyapunov exponents, and
  - Entropy-based measures (like how “random” the signal’s energy is in different frequencies) to cluster signals.
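The runner analogy corresponds to a positive largest Lyapunov exponent. A standard toy system, the logistic map x → r·x·(1−x) (one of the benchmarks the Knowledge Gaps section suggests for validation), makes this concrete: at r = 4 the exponent is ln 2 ≈ 0.693 (chaotic), while at r = 3.2 it is negative (a periodic cycle). A short numerical sketch:

```python
import math

def logistic_lyapunov(r: float, x0: float = 0.3,
                      n_transient: int = 1000, n_iter: int = 100_000) -> float:
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    averaged from log|f'(x)| = log|r*(1 - 2x)| along the orbit."""
    x = x0
    for _ in range(n_transient):  # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        acc += math.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)
        x = r * x * (1.0 - x)
    return acc / n_iter

print(logistic_lyapunov(4.0))   # ~ ln(2) = 0.693... -> chaotic
print(logistic_lyapunov(3.2))   # negative -> periodic (period-2 cycle)
```

For EEG there is no known map, so the paper fits an RNN to the data first and computes the exponents from the fitted model.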
What is contrastive learning?
- It’s like a “spot-the-same-object” game for computers.
- You take one EEG trial, make two slightly different versions (add tiny noise, mask a small time chunk, or drop a channel), and tell the model: “These two are still the same person/trial—make them close together.” Different trials are pushed farther apart.
- This teaches the model to focus on the core patterns and ignore small disturbances.
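A minimal sketch of the NT-Xent loss described here, following the SimCLR formulation; the embedding size, batch size, and temperature below are illustrative:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5):
    """NT-Xent over a batch: z1[i] and z2[i] are two augmented views of
    trial i; every other embedding in the batch acts as a negative."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d), unit norm
    sim = z @ z.t() / tau                               # scaled cosine sims
    n = z1.size(0)
    sim.fill_diagonal_(float('-inf'))   # self-similarity never counts
    # the positive for index i is its other view: i+n (or i-n)
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

torch.manual_seed(0)
# Two augmented "views" of the same 4 trials (random stand-ins here)
z1 = torch.randn(4, 32)
z2 = z1 + 0.05 * torch.randn(4, 32)   # nearly identical second view
loss_close = nt_xent_loss(z1, z2)     # aligned views -> low loss
loss_far = nt_xent_loss(torch.randn(4, 32), torch.randn(4, 32))
```

When the two views of each trial embed close together, the loss is small; unrelated embeddings yield a higher loss, which is exactly the pressure that pulls same-trial views together and pushes different trials apart.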
Main Findings and Why They Matter
- Cleaning the signals first with the denoising autoencoder helps the rest of the system learn more stable and useful features.
- Training the model to do multiple tasks at once (motor imagery, chaos detection, contrastive learning) improves robustness to noise and generalizes better across different people and datasets.
- Results:
  - Motor imagery classification improved compared to simple baselines (a modest but consistent gain).
  - Chaos vs. non-chaos detection improved strongly.
  - Across public EEG datasets (like BCI2000 and BNCI Horizon 2020), their system achieved higher F1 scores than several well-known methods, indicating better balanced performance.
- In short: combining cleaning, dynamics-aware learning (chaos), and contrastive learning makes the model both more accurate and more reliable in real-world, noisy EEG settings.
Implications and Potential Impact
- Better brain-computer interfaces (BCIs): Real-world EEG is messy. This approach makes BCIs more dependable, especially when data is limited or noisy.
- More interpretable brain dynamics: Using ideas like Lyapunov exponents brings mathematical insight to brain signals, potentially helping doctors and scientists understand brain states more clearly.
- Flexible and scalable: Because the system is modular (cleaning first, then multitask learning), it can be extended to new tasks (like tracking attention or mental workload) or new models (like graph neural networks that capture spatial relations between electrodes).
- Future directions:
  - Detect more types of dynamics (periodic, quasiperiodic, no-attractor) for richer brain-state understanding.
  - Use models that better capture how different EEG channels relate in space.
  - Improve interpretability so clinicians can trust and understand the decisions.
Overall, this research shows a practical way to get more reliable information from noisy brain signals by cleaning them, learning from their deeper patterns, and training the model in a smart, multi-purpose way.
Knowledge Gaps
Unresolved knowledge gaps, limitations, and open questions
The following list summarizes what remains missing, uncertain, or unexplored in the paper, articulated as actionable gaps for future research:
- Validate chaos/non-chaos labels against established ground truth: benchmark the shPLRNN-derived Lyapunov labels and entropy-based clustering against classical estimators applied directly to EEG (e.g., Wolf/Rosenstein methods, correlation dimension, surrogate data tests), and report agreement (e.g., Cohen’s kappa, calibration).
- Clarify and correct the entropy-to-chaos mapping: rigorously test whether spectral/permutation entropy increases or decreases with chaotic dynamics using synthetic periodic/quasiperiodic/chaotic signals, and justify the directionality used for clustering-based “chaos” tags.
- Quantify label uncertainty and propagate it during training: treat chaos labels from unsupervised pipelines as probabilistic/soft labels; compare cross-entropy vs. label-smoothing, bootstrapping, or weakly supervised methods to reflect uncertainty.
- Assess sensitivity of chaos labels to pipeline hyperparameters: report how shPLRNN latent dimensionality, clipping, training length, window size, and entropy feature choices affect chaos/non-chaos assignments.
- Address discrete- vs continuous-time mismatch: justify using discrete-time Lyapunov classification on sampled EEG (a continuous-time process), and evaluate whether sampling rate and discretization alter LE sign interpretations.
- Move beyond binary chaos/non-chaos: develop multi-class dynamical regime labeling (periodic, quasiperiodic, chaotic, no-attractor) with validated criteria; analyze how regime granularity affects multitask performance.
- Time-resolved dynamical labeling: implement sliding-window LE/entropy estimation to capture within-trial nonstationarity; study transitions between regimes and their relation to MI performance.
- Verify that denoising preserves dynamical signatures: quantify the effect of the DAE (trained against bandpass-filter “clean” targets) on LE spectrum, KY dimension, entropy, and correlation dimension; compare against ICA/wavelet denoising and report distortions of task-relevant bands.
- Evaluate the impact of bandpass-filter-derived targets: test whether using bandpass-filtered signals as “clean” ground truth biases the learned representations or removes high-frequency features relevant to MI/dynamics.
- Provide principled augmentation design and validation: measure how jitter, scaling, time masking, and channel dropout alter LE, entropy, and MI accuracy; search augmentation policies that preserve dynamical variability while improving invariance.
- Mitigate multitask interference with formal analyses: quantify gradient conflicts (e.g., cosine similarity across task gradients) and test PCGrad, GradDrop, or uncertainty-based weighting vs. fixed loss weights (Ac, Ad, As); study scheduling strategies (e.g., curriculum) to reduce trade-offs.
- Compare staged vs end-to-end training: evaluate training the DAE jointly (fine-tuning) vs. strictly staged freezing; report effects on MI accuracy and chaos detection, and potential overfitting or leakage.
- Control for subject- and dataset-specific effects in contrastive sampling: assess class collision and subject bias in NT-Xent negatives; test subject-aware sampling or memory banks to reduce spurious contrasts.
- Resolve dataset/task mismatch: BNCI datasets contain only imagery, yet results include real vs imagery classification; clarify which datasets support this task, how splits were formed, and whether cross-dataset comparisons are valid.
- Standardize windowing across sampling rates: justify the fixed 320-point windows across datasets with different sampling rates; perform sensitivity analyses with seconds-based windows to ensure comparable temporal context.
- Report class distributions and calibration: provide class imbalance statistics for real vs imagery and chaos vs non-chaos; add calibration curves, ECE/MCE, and threshold analyses for binary decisions.
- Strengthen evaluation with statistical rigor: report confidence intervals, significance tests, and cross-subject variability; include balanced accuracy, AUC, and confusion matrices, not only accuracy/F1.
- Ensure apples-to-apples SOTA comparisons: re-run baselines under identical LOSO protocols, preprocessing, and hyperparameters on the same datasets; avoid cross-dataset metrics (e.g., BCI IV-2a) when not directly comparable.
- Detail architecture choices and ablations: specify Transformer depth, heads, positional encodings (time vs channels), and pooling; ablate CNN stem vs Transformer, channel-wise vs sequence-wise attention, and include graph-based baselines for spatial topology.
- Validate shPLRNN LE recovery on known systems: fit the model to canonical chaotic/periodic benchmarks (Lorenz, Rössler, logistic map) to test whether learned LEs match ground truth; report errors and reliability before applying to EEG.
- Explore richer dynamical features: incorporate KY dimension, entropy variants, recurrence quantification, and multiscale metrics as auxiliary targets or regularizers; study their incremental benefit and redundancy.
- Address preprocessing scaling effects: test z-scoring/robust scaling vs per-channel MinMax scaling to preserve inter-channel amplitude relationships important for spatial decoding.
- Guard against data leakage in overlapping windows: confirm non-overlapping train/val/test splits at subject and trial levels; quantify leakage risk and its impact on reported metrics.
- Analyze why MI improvements are marginal: perform error analysis by subject/session, frequency band contribution, and channel importance; test whether the dynamical head or contrastive loss helps specific cohorts (e.g., low-SNR).
- Link model decisions to neurophysiology: provide attention/saliency maps, bandpower contributions, and channel topology interpretations; correlate chaos predictions with known EEG rhythms (alpha/beta/gamma) and physiological states (eyes open/closed).
- Evaluate real-time feasibility: report inference latency, memory footprint, and throughput on typical BCI hardware; study streaming performance and adaptability to online settings.
- Expand to clinical validation: test the dynamical classification on epilepsy/Alzheimer’s datasets and assess whether chaos-related markers add diagnostic value beyond MI decoding.
- Release full reproducibility artifacts: publish code, preprocessed splits, random seeds, chaos labeling pipelines, and configuration files; provide a standardized benchmark to enable fair comparisons and replication.
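The gradient-conflict gap above can be probed with a few lines: compute each task's gradient on the shared parameters and check their cosine similarity, the diagnostic underlying PCGrad-style methods. The toy two-head model below is a hypothetical stand-in, not the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
# Toy shared encoder with two task heads (stand-ins for the MI and chaos heads)
shared = nn.Linear(16, 8)
head_a, head_b = nn.Linear(8, 2), nn.Linear(8, 2)

x = torch.randn(32, 16)
ya, yb = torch.randint(0, 2, (32,)), torch.randint(0, 2, (32,))

def grad_on_shared(loss):
    """Flattened gradient of `loss` w.r.t. the shared parameters only."""
    grads = torch.autograd.grad(loss, shared.parameters(), retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

h = shared(x)
ga = grad_on_shared(F.cross_entropy(head_a(h), ya))
gb = grad_on_shared(F.cross_entropy(head_b(h), yb))

# Cosine < 0 signals conflicting task gradients (a candidate for PCGrad,
# GradDrop, or uncertainty-based loss weighting)
conflict = F.cosine_similarity(ga, gb, dim=0)
print(float(conflict))
```

Logging this quantity over training would make the interference between the MI, chaos, and contrastive objectives measurable rather than anecdotal.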
Practical Applications
The paper contributes a two-stage framework for noisy EEG: a denoising autoencoder (DAE) followed by a multitask CNN+Transformer that jointly performs motor imagery (MI) classification, chaos vs. non‑chaos discrimination via Lyapunov-exponent-derived labels, and self‑supervised contrastive learning (NT‑Xent with EEG-specific augmentations). Below are actionable applications that leverage these findings, organized by deployment horizon, with sector links, candidate tools/workflows, and key assumptions.
Immediate Applications
These can be prototyped or deployed now with available datasets, libraries (MNE, PyTorch), and commodity GPUs; they align with results shown on BCI2000 and BNCI Horizon 2020.
- Each item below names the application, the relevant sectors, potential tools/products/workflows, and key assumptions/dependencies.
- Robust EEG denoising plug‑in for research and clinical workflows
- Sectors: Healthcare, Software (neurotech toolchains), Academia
- What: Integrate the 1D-conv DAE (SmoothL1 + SpectralLoss) as a preprocessing step to suppress artifacts while preserving task‑relevant spectra (delta–gamma). Improves low‑SNR trials and stabilizes downstream decoding.
- Tools/products/workflows: MNE/EEGLAB plug‑in; PyTorch/MNE wrapper; “clean‑stream” module for EDF/BCI2000/BNCI data; real‑time artifact suppression node in LabStreamingLayer pipelines.
- Assumptions/dependencies: Access to representative “clean target” filters during DAE training; similar channel layouts/sampling as training data or careful transfer learning.
- Faster, more reliable MI decoding for BCIs with reduced calibration
- Sectors: Healthcare (rehab, prosthetics), Assistive Tech, Gaming/AR/VR
- What: Deploy the CNN+Transformer backbone trained with light/full contrastive augmentations to improve cross‑subject MI decoding and reduce per‑user calibration time.
- Tools/products/workflows: BCI calibration wizard using contrastive pretraining on prior cohorts; edge inference service for MI decoding; SDK for prosthetic control/gaming.
- Assumptions/dependencies: Comparable headset/channel setups; robust augmentation policies (jitter, scaling, time masking, channel dropout) that preserve MI semantics.
- Semi‑supervised chaos labeling to bootstrap EEG datasets
- Sectors: Academia, Healthcare R&D, Neurotech
- What: Use LE‑derived labels (via GTF‑trained shPLRNN) and entropy measures (spectral, permutation) to auto‑tag chaotic vs. non‑chaotic trials, improving regularization and data efficiency.
- Tools/products/workflows: Batch labeling utility; active learning loop that prioritizes uncertain/discordant chaos tags; dataset curation scripts for LOSO benchmarks.
- Assumptions/dependencies: Stable RNN fitting for LE estimation; validated clustering thresholds for entropy features; agreement between pipelines in new domains.
- EEG acquisition quality control (QC) dashboard with dynamical metrics
- Sectors: Healthcare, Device QA, Academia
- What: Real‑time QC showing PSD, band powers, spectral/permutation entropy, and LE‑inspired indicators to flag noisy channels/sessions and guide re‑placement or re‑recording.
- Tools/products/workflows: Web dashboard (Python/FastAPI + Plotly) integrated with MNE; channel‑wise alerts; session acceptance criteria.
- Assumptions/dependencies: Access to streaming data and channel metadata; acceptable latency for computing entropy metrics online.
- Contrastive pretraining service for low‑label EEG labs and startups
- Sectors: Software, Academia, Neurotech
- What: Centralized NT‑Xent pretraining on institutional EEG corpora using EEG‑tailored augmentations, then provide encoder checkpoints for diverse downstream tasks (MI, workload, sleep).
- Tools/products/workflows: Model zoo with trained encoders; cookiecutter repo for fine‑tuning; data versioning with DVC; reproducible recipes.
- Assumptions/dependencies: Legal/data‑use approvals for pooled pretraining; augmentation policies that generalize across devices and tasks.
- Consumer EEG app upgrade: stable focus/relax metrics under real‑world noise
- Sectors: Consumer Health/Wearables, Wellness
- What: Embed the DAE and light‑contrastive encoder to stabilize attention/relaxation indices from low‑cost headsets (e.g., Muse, Emotiv).
- Tools/products/workflows: On‑device or mobile inference; SDK for partner apps; A/B testing against existing filters.
- Assumptions/dependencies: Portability to fewer channels and variable montage; mobile optimization.
- Driver drowsiness/attention monitoring with improved robustness
- Sectors: Automotive, Occupational Safety
- What: Adapt the multitask encoder (sans MI head) to attention/drowsiness labels; DAE + contrastive learning improves resilience to motion artifacts and electrode drift.
- Tools/products/workflows: In‑cabin EEG band integration; fleet‑level model updates; alerting middleware.
- Assumptions/dependencies: Availability of lightweight, comfortable EEG; regulatory and privacy approvals.
- Reproducible academic pipeline for dynamics‑aware EEG research
- Sectors: Academia
- What: Standardized codebase combining DAE preprocessing, MTL (MI + chaos), and SSL; promotes cross‑dataset LOSO comparisons and interpretability via dynamical signatures.
- Tools/products/workflows: Open‑source “Dyna‑EEG” template; documented configs; experiment tracking (Weights & Biases/MLflow).
- Assumptions/dependencies: Community adoption and contributions; dataset accessibility.
- Edge streaming artifact suppression for tele-neuro services
- Sectors: Telehealth, Remote Trials
- What: Deploy the DAE as a lightweight edge filter to stabilize streams in remote/home recordings before cloud analysis.
- Tools/products/workflows: gRPC microservice; ONNX/TensorRT export; bandwidth‑aware compression.
- Assumptions/dependencies: Sufficient on‑device compute; robust performance on diverse home environments.
Long-Term Applications
These require additional validation, scaling, cross‑device generalization, interpretability, and, in some cases, regulatory clearance.
- Chaos‑aware clinical decision support for epilepsy and other disorders
- Sectors: Healthcare
- What: Integrate LE‑informed features into seizure detection, prognosis, and subtyping; monitor disease progression or treatment response.
- Tools/products/workflows: Hospital PACS/EEG system plug‑in; clinician dashboard with dynamics summaries (e.g., KY dimension); longitudinal reports.
- Assumptions/dependencies: Large multi‑center validation; harmonization across devices/montages; clinician‑friendly explanations; regulatory clearance.
- Anesthesia depth and sedation monitoring using dynamical signatures
- Sectors: Healthcare (OR/ICU)
- What: Use chaos/non‑chaos dynamics and contrastively learned embeddings to track depth of anesthesia and detect adverse states.
- Tools/products/workflows: OR monitors with real‑time dynamics indices; alarms and decision support; integration with patient vital signs.
- Assumptions/dependencies: Prospective trials; artifact resilience in OR settings; interoperability with monitoring vendors.
- Adaptive, closed‑loop BCIs guided by brain state dynamics
- Sectors: Healthcare (neurorehab), Assistive Robotics, AR/VR
- What: Use the chaos head to gate/control adaptivity (e.g., recalibration schedules, control gains) for stable, long‑duration BCI control.
- Tools/products/workflows: Policy module that conditions decoding/noise augmentation on current dynamics regime; online learning loops.
- Assumptions/dependencies: Safe online adaptation; latency constraints; robustness to nonstationarities.
- Regulatory‑grade EEG preprocessing and representation stack
- Sectors: Medical Devices, Standards Bodies
- What: Standardize a DAE+SSL preprocessor with documented augmentations and performance claims as a certifiable module.
- Tools/products/workflows: Reference implementations; conformance tests; audit trails for preprocessing decisions.
- Assumptions/dependencies: Consensus benchmarks; reproducibility criteria; post‑market surveillance plans.
- TinyML/on‑device deployment for wearables and implants
- Sectors: Wearables, Neuroprosthetics
- What: Compress DAE and Transformer (pruning, distillation) for real‑time inference with limited power.
- Tools/products/workflows: Model compression toolchain; mixed‑precision kernels; streaming schedulers.
- Assumptions/dependencies: Accuracy retention under compression; efficient hardware support.
- Multimodal neuro-sensing with shared contrastive objectives
- Sectors: Healthcare, Human–Computer Interaction
- What: Extend the multitask contrastive pipeline to EEG+EMG+fNIRS, improving robustness and reducing label needs via cross‑modal positives.
- Tools/products/workflows: Multimodal encoders; synchronized augmentation policies; alignment losses beyond NT‑Xent.
- Assumptions/dependencies: Time‑aligned multimodal datasets; sensor fusion standards.
- Educational and workplace cognitive load monitoring with privacy guardrails
- Sectors: Education, Enterprise Wellness, Policy
- What: Use dynamics‑aware embeddings for mental workload/engagement; implement privacy‑preserving analytics (federated, DP).
- Tools/products/workflows: Opt‑in analytics platforms; dashboards for students/employees; federated SSL pretraining.
- Assumptions/dependencies: Ethical frameworks, consent, and governance; bias and fairness audits.
- Policy guidance and benchmarking for EEG data quality and reporting
- Sectors: Policy/Standards, Research Governance
- What: Recommend minimum reporting of preprocessing (denoising settings), augmentation policies, and dynamical metrics; require LOSO or cross‑site evaluations.
- Tools/products/workflows: Checklist for publications/device submissions; open benchmarks including dynamics labels.
- Assumptions/dependencies: Community buy‑in; maintenance of public benchmarks.
- Cross‑domain extension to other noisy nonlinear time series
- Sectors: Energy, Industrial IoT, Finance, Robotics
- What: Apply the DAE+MTL+contrastive template to grid stability, machine vibration, asset price microstructure, or robot sensor fusion where nonlinear dynamics and noise coexist.
- Tools/products/workflows: Domain‑specific denoisers; self‑supervised pretraining on unlabeled logs; anomaly/regime classifiers.
- Assumptions/dependencies: Validity of augmentations and “dynamics labels” in each domain; domain shift handling; subject‑matter validation.
- Open “Dyna‑EEG” SDK and model zoo as community infrastructure
- Sectors: Academia, Startups, Open Science
- What: Sustained open-source suite with pretrained encoders, denoisers, labeling utilities (LE/entropy), and reproducible recipes for EEG tasks.
- Tools/products/workflows: Versioned checkpoints; documentation; CI for benchmarks; tutorial curricula.
- Assumptions/dependencies: Funding and maintenance; permissive licensing; continued dataset access.
Notes on common assumptions/dependencies across applications:
- Generalization across headsets and montages may require transfer learning and careful normalization.
- LE/entropy-derived labels assume reliable RNN fitting and stable entropy estimation; thresholds may be cohort/device‑specific.
- Contrastive augmentations must preserve task semantics; mis‑specified policies can suppress informative dynamics.
- Clinical deployments need interpretability, rigorous validation, and regulatory pathways.
- Privacy, ethics, and informed consent are critical for consumer, workplace, and educational uses.
Glossary
- Attractor: In dynamical systems, a set in state space toward which trajectories evolve (e.g., fixed points, limit cycles, chaotic sets). Example: "whether they exhibit a chaotic attractor or a non-chaotic pattern (periodic, quasiperiodic, or no attractor)"
- BNCI Horizon 2020: A suite of benchmark EEG datasets commonly used for motor imagery decoding research. Example: "BNCI Horizon 2020 (004/008/009), with 9 subjects, 22 channels, and a sampling rate of 250 Hz."
- Brain-Computer Interfaces (BCIs): Systems that translate brain signals into commands for computers or external devices. Example: "EEG-based Brain-Computer Interfaces (BCIs)"
- Common Spatial Pattern (CSP): A spatial filtering technique that maximizes variance differences between classes in EEG data. Example: "Common Spatial Pattern (CSP) filtering"
- Correlation dimension: A fractal dimension estimate used to characterize the geometric complexity of an attractor. Example: "Lyapunov exponents, correlation dimension, and entropy-based measures"
- Denoising Autoencoder (DAE): A neural network trained to reconstruct clean signals from noisy inputs, improving robustness. Example: "We employ a 1D convolutional Denoising Autoencoder (DAE), trained as a powerful unsupervised pretraining mechanism, to reconstruct clean EEG from noisy observations."
- EDF (European Data Format): A standard file format for storing physiological signals such as EEG. Example: "EDF-formatted EEG data is loaded via the MNE library"
- Electroencephalography (EEG): A technique for recording electrical activity of the brain via scalp electrodes. Example: "Electroencephalography (EEG) signals are widely used to study human brain activity"
- Generalized Teacher Forcing (GTF): A training scheme that blends teacher forcing and free-running to learn complex or chaotic dynamics in sequence models. Example: "with the Generalized Teacher Forcing (GTF) scheme [47]"
- Hellinger distance: A metric for quantifying similarity between probability distributions. Example: "including Hellinger distance, RMSE, Wasserstein distance, and MAE."
- Jacobian matrix: The matrix of partial derivatives that captures local sensitivity of a system’s state to perturbations. Example: "A key quantity in the analysis of DS is the Jacobian matrix, which captures the local sensitivity of the current state to perturbations in the previous state"
- Kaplan-Yorke (KY) dimension: An estimate of an attractor’s effective (fractal) dimensionality derived from the Lyapunov spectrum. Example: "the Kaplan-Yorke (KY) dimension is often employed [42]."
- Leave-One-Subject-Out (LOSO): A cross-validation protocol where one subject is held out for testing while others are used for training. Example: "robust cross-subject EEG motor imagery decoding under LOSO evaluation"
- Lyapunov Exponents (LEs): Quantities measuring exponential rates of divergence or convergence of nearby trajectories, indicating chaos or stability. Example: "Lyapunov Exponents (LEs) are fundamental quantities used to characterize the long- term behavior of trajectories"
- Lyapunov spectrum: The full set of Lyapunov exponents describing divergence rates along different directions in phase space. Example: "Likewise, the Lyapunov spectrum for Oz1 can also be calculated"
- MNE library: A Python toolkit for processing MEG/EEG data, including reading formats and preprocessing pipelines. Example: "EDF-formatted EEG data is loaded via the MNE library [43, 44]"
- Motor Imagery (MI): The mental rehearsal of movement without actual execution, often used in BCI tasks. Example: "Motor Imagery (MI) tasks"
- NT-Xent (Normalized Temperature-scaled Cross Entropy) loss: A contrastive learning objective that aligns representations of augmented views while separating negatives. Example: "NT-Xent (Normalized Temperature-scaled Cross Entropy) loss (Lcontrastive)"
- Permutation entropy: A complexity measure based on the diversity of ordinal patterns in time series data. Example: "We also computed permutation entropy, which captures the diversity of ordinal patterns in the time series"
- Power Spectral Density (PSD): The distribution of signal power across frequency components. Example: "Power Spectral Density (PSD), band-specific power, and wavelet decomposition"
- shPLRNNs (shallow Piecewise-Linear RNNs): Recurrent neural networks with piecewise-linear dynamics used to model and analyze system behavior (including Lyapunov exponents). Example: "We employed (clipped) shallow Piecewise-Linear RNNs (shPLRNNs)"
- SimCLR: A self-supervised contrastive learning framework that uses augmented pairs to learn invariant representations. Example: "Frameworks such as SimCLR [8] and MoCo [9]"
- Spectral entropy: Shannon entropy computed from the normalized power spectrum, reflecting signal irregularity. Example: "we extracted spectral entropy, that is, Shannon entropy of the normalized power spectrum"
- Wasserstein distance: An optimal transport-based metric measuring the distance between probability distributions. Example: "including Hellinger distance, RMSE, Wasserstein distance, and MAE."
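Two of the entropy measures defined above can be computed in a few lines of numpy. The normalization choices below (log base, dividing by the maximum possible entropy) vary across implementations and are assumptions here:

```python
import numpy as np
from math import factorial

def spectral_entropy(x: np.ndarray) -> float:
    """Shannon entropy of the normalized power spectrum (see glossary),
    scaled to [0, 1] by the maximum entropy over the spectrum's bins."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(len(psd)))

def permutation_entropy(x: np.ndarray, order: int = 3) -> float:
    """Entropy of ordinal patterns of length `order` (see glossary),
    scaled to [0, 1] by log2(order!)."""
    counts = {}
    for i in range(len(x) - order + 1):
        key = tuple(np.argsort(x[i:i + order]))  # ordinal pattern
        counts[key] = counts.get(key, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum() / np.log2(factorial(order)))

rng = np.random.default_rng(0)
t = np.arange(1024)
sine = np.sin(2 * np.pi * t / 64)   # regular signal: low entropy
noise = rng.standard_normal(1024)   # irregular signal: high entropy
```

A regular (periodic) signal concentrates its power and its ordinal patterns, so both measures come out lower than for noise; this directionality is exactly what the entropy-to-chaos mapping discussed in the Knowledge Gaps section needs to validate.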