
Seismic Signal Denoising Methods

Updated 4 February 2026
  • Seismic signal denoising is a suite of mathematical and computational methods that suppress noise and enhance signal clarity in geophysical data.
  • Techniques range from adaptive transform-based methods and sparse coding to deep learning frameworks that address complex, nonstationary noise.
  • Recent innovations achieve significant SNR gains and improved preservation of subtle seismic events, enabling more accurate field interpretations.

Seismic signal denoising refers to the set of mathematical, computational, and algorithmic methods used to suppress noise in seismic data, thereby enhancing signal-to-noise ratio (SNR) and preserving geophysically significant events for subsequent processing or interpretation. Modern approaches span adaptive, transform-based, sparse representation, deep learning, and diffusion modeling frameworks, each adapted to address the complex, nonstationary, and spatially heterogeneous noise environments found in field and synthetic datasets.

1. Noise Sources and Modeling in Seismic Data

Seismic noise includes random (i.i.d. or colored), band-limited, and structured (coherent) components. Sources of noise include ambient environmental fluctuations, instrument and coupling artifacts, anthropogenic and natural surface waves, and sensor-specific disturbances such as tube waves in vertical seismic profiling (VSP). The composite waveform is typically modeled as

x = s + n,

where x is the observed data, s is the true underlying signal, and n is the noise, which may be random, correlated, or structured.

Random noise is often assumed additive and zero-mean, with independence between samples or traces, whereas coherent noise exhibits spatial, dip, frequency, or trace-wise correlation—necessitating more advanced suppression techniques. Field datasets may include multiple noise types, and the nonstationarity of both signal and noise poses significant challenges to robust denoising algorithms (Birnie et al., 2021, Iqbal et al., 2018, Slang et al., 2024).
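The additive model above can be made concrete with a short sketch (all names and values are illustrative): a synthetic Ricker wavelet stands in for the clean signal s, white Gaussian noise for n, and the empirical SNR is computed from the two components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean signal s: a Ricker wavelet, a common synthetic seismic pulse.
t = np.linspace(-0.1, 0.1, 401)
f0 = 30.0                                     # dominant frequency in Hz (illustrative)
s = (1 - 2 * (np.pi * f0 * t) ** 2) * np.exp(-((np.pi * f0 * t) ** 2))

# Additive zero-mean random noise n, giving the observed trace x = s + n.
sigma = 0.2
n = sigma * rng.standard_normal(s.shape)
x = s + n

# Empirical SNR of the observed trace, in dB.
snr_db = 10 * np.log10(np.sum(s ** 2) / np.sum(n ** 2))
```

In field data neither s nor n is available separately, which is precisely why the methods below estimate one (or both) from x alone.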

2. Classical and Sparse Representation Methods

Empirical Mode Decomposition (EMD) and Weighted-IMF Denoising

EMD is a data-driven, adaptive transformation that decomposes seismic traces into zero-mean Intrinsic Mode Functions (IMFs) and a residual. Weighted-IMF reconstruction assigns signal-to-noise adaptive weights w_k to each IMF rather than hard-selecting noise-free modes, yielding the denoised signal

x_{\mathrm{denoised}}(t) = \sum_{k=1}^K w_k\,\mathrm{IMF}_k(t) + r_K(t).

Weights are typically determined from mode energy or correlation, e.g., w_k = max(0, 1 - E_noise/E_k), with E_k the energy of the k-th IMF, or by a soft-threshold strategy. This approach can yield SNR gains of 3–7 dB over standard EMD, with improved preservation of low-amplitude signal content (Jin, 2019).
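A minimal sketch of the weighted reconstruction step, assuming the IMFs and residual have already been computed by an EMD implementation (the decomposition itself is not shown, and the noise-energy estimate is taken as given):

```python
import numpy as np

def weighted_imf_reconstruct(imfs, residual, noise_energy):
    """Recombine IMFs with energy-based weights w_k = max(0, 1 - E_noise / E_k).

    imfs         : (K, T) array, one precomputed IMF per row.
    residual     : (T,) array, the final EMD residual r_K(t).
    noise_energy : scalar estimate of the per-mode noise energy E_noise.
    """
    energies = np.sum(imfs ** 2, axis=1)                      # E_k for each mode
    weights = np.maximum(0.0, 1.0 - noise_energy / energies)  # w_k, clipped at 0
    denoised = weights @ imfs + residual                      # sum_k w_k IMF_k + r_K
    return denoised, weights
```

Modes whose energy falls at or below the noise estimate are weighted to zero, while high-energy (signal-dominated) modes pass nearly unchanged, which is the soft alternative to hard mode selection described above.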

Curvelet Transform and Blind Thresholding

The curvelet transform provides optimal sparsity for seismic events with directional and frequency localization, making it well-suited to suppressing both random and coherent noise. In the presence of correlated noise, ZCA whitening equalizes variance across curvelet subbands, restoring thresholding efficiency. Noise variance is estimated empirically from signal-free patches. Hard thresholding is then applied in the transform domain, with adaptive thresholds at each scale and orientation. This method produces PSNR gains of 10–15 dB over wavelet or EMD denoising and removes ground roll and colored noise robustly (Iqbal et al., 2018).
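The thresholding logic can be sketched independently of the transform. A real curvelet implementation supplies many scale–orientation subbands, each thresholded at its own noise-adapted level; in this hedged stand-in a discrete Fourier transform plays the role of the sparsifying transform and a single global threshold is used:

```python
import numpy as np

def hard_threshold_denoise(x, sigma, k=3.0):
    """Zero all transform coefficients below k * (coefficient noise std).

    For real white noise of std sigma, rFFT coefficients have std roughly
    sigma * sqrt(N / 2); a curvelet version would repeat this per subband,
    with sigma estimated from signal-free patches as described above.
    """
    X = np.fft.rfft(x)
    thresh = k * sigma * np.sqrt(len(x) / 2)
    X[np.abs(X) < thresh] = 0.0                 # hard thresholding
    return np.fft.irfft(X, n=len(x))
```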

Sparse and Dictionary-based Coding

Dictionary-learning methods, such as 2D sparse coding (2DSC), double-sparsity dictionaries, and convolutional sparse coding (CSC), exploit the redundancy and structure in seismic data. In 2DSC, 3D seismic volumes are modeled as third-order tensors and denoised through tensor-linear combinations using learned overcomplete dictionaries. Alternating minimization algorithms, with soft-thresholding sparsity constraints, yield higher SNR and computational efficiency compared to traditional K-SVD or transform domain methods (Su et al., 2017). CSC further improves performance by jointly learning convolutional filters and sparse codes, leveraging global signal modeling to maintain higher PSNR and SSIM, especially on data with large gaps or high noise (Almadani et al., 2024). Double-sparsity dictionary learning, with mask-aware optimization, permits joint interpolation and denoising, surpassing fixed-basis transforms, particularly for missing or irregularly-sampled traces (Zhu et al., 2017).
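The sparse-coding half of such schemes reduces to an l1-regularized least-squares problem. A minimal ISTA solver for a fixed dictionary D is sketched below (the dictionary-update step that alternates with it in the full learning loop is omitted):

```python
import numpy as np

def soft_threshold(v, lam):
    """Elementwise soft thresholding, the proximal operator of lam * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(x, D, lam=0.1, step=None, n_iter=200):
    """ISTA for min_a 0.5 * ||x - D a||^2 + lam * ||a||_1 with fixed dictionary D."""
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L, L = Lipschitz const of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)                 # gradient of the data-fit term
        a = soft_threshold(a - step * grad, step * lam)
    return a
```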

3. Supervised Deep Learning Approaches

CNN- and U-Net-based Regression

Deep convolutional neural networks, including U-Net and ResNet-style architectures, are trained on noisy-clean pairs or synthetic/augmented data. Supervised networks learn a mapping f_theta(x) ≈ s or f_theta(x) ≈ n (predicting either the signal or the noise). Representations encoding spatial and temporal coherence—such as multi-level skip connections, dilated convolutions, and attention modules—enable robust denoising across diverse noise types.

Key technical advances:

  • Multi-input strategies (e.g., adjacent gathers) increase contextual information, minimizing leakage.
  • Residual-learning approaches (e.g., DR-Unet) predict only the noise component, facilitating better amplitude and signal continuity preservation than direct signal regression (Ma et al., 2023).
  • Networks are generally trained using MSE, MAE, and/or physics-informed losses, with batch normalization and data augmentation used to avoid overfitting.

Performance metrics such as SNR gains (8–16 dB), PSNR, SSIM, and local similarity are standard. Supervised methods outperform classical filtering, but may show limited generalizability on field data when trained exclusively on synthetic labels, motivating the use of transfer learning and hybrid pretraining protocols (Slang et al., 2024, Shrivastava et al., 2023, Barros et al., 2024).
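The residual-learning idea above (predict the noise, then subtract it) can be illustrated without a deep-learning framework. Here a one-layer linear model trained on noisy-clean pairs stands in for the CNN; the signal, patch size, and learning rate are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy noisy-clean pair: a smooth sinusoidal "signal" plus white noise.
T, P = 2048, 9                                   # trace length, patch size
s = np.sin(2 * np.pi * 5 * np.arange(T) / T)
n = 0.3 * rng.standard_normal(T)
x = s + n

# Residual-learning targets: the model predicts the NOISE at each patch centre.
patches = np.lib.stride_tricks.sliding_window_view(x, P)   # (T - P + 1, P)
targets = n[P // 2 : P // 2 + len(patches)]

# One-layer linear "network" f_theta, trained with MSE gradient descent.
w = np.zeros(P)
for _ in range(500):
    grad = patches.T @ (patches @ w - targets) / len(targets)
    w -= 0.05 * grad

# Denoised estimate: subtract the predicted noise from the observation.
centre = x[P // 2 : P // 2 + len(patches)]
denoised = centre - patches @ w
```

Because the model outputs the noise rather than the signal, the subtraction leaves any signal component the model cannot explain untouched, which is the amplitude-preservation argument made for DR-Unet-style residual networks above.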

Diffusion Model-based Denoising

DDPMs and accelerated solvers parameterize denoising as a reverse stochastic process, learning either to predict the noise or the clean signal via iterative refinement. Training can be undertaken directly on field noise (pre-arrival windows, “noise as label”) to capture the true distribution of noise.

Architectures combine attention, U-Net backbones, and residual blocks, and leverage accelerated solvers (e.g., DPM-Solver, DDIM) for practical inference speed. Explicit noise modeling (vs. signal-based approaches) exhibits improved SNR and lower signal leakage, especially in low-SNR regimes (below 0 dB) and with complex field-captured noise. Reported results include SSIM up to 0.92 and SNR improvements exceeding 5 dB over frequency-wavenumber filtering (Zhu et al., 1 Mar 2025, Zhu et al., 3 Sep 2025).
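The forward process the DDPM learns to invert can be written in a few lines. In this sketch the true noise is used as an oracle in place of the trained predictor, purely to show that the noise parameterization is exactly invertible; the schedule values are standard illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear beta schedule, as in standard DDPMs.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)                  # cumulative product abar_t

x0 = np.sin(np.linspace(0, 4 * np.pi, 256))      # stand-in clean trace

# Forward noising: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps.
t = 400
eps = rng.standard_normal(x0.shape)
xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1 - alpha_bars[t]) * eps

# Reverse direction: given a noise prediction eps_hat (here the oracle eps,
# standing in for the trained network), the clean trace is recovered exactly.
eps_hat = eps
x0_hat = (xt - np.sqrt(1 - alpha_bars[t]) * eps_hat) / np.sqrt(alpha_bars[t])
```

In practice eps_hat comes from the network and the reverse process is iterated over many (or, with accelerated solvers, few) timesteps rather than inverted in one step.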

4. Self-Supervised and Mask-based Innovations

Blind-Spot and Self2Self Learning

Self-supervised blind-spot networks do not require noisy-clean pairs. Pixels or traces are masked or replaced ("corrupted") to create training targets, relying on the premise that noise is unpredictable from the neighborhood while signal is spatially coherent. These architectures match or surpass transform-based denoisers for i.i.d. noise and are efficient to train and deploy (under 7 min training, under 50 ms inference per line) (Birnie et al., 2021, Xu et al., 2022).

Blind-mask extensions use mask patterns informed by either prior knowledge of noise directionality or explainable AI (XAI)-derived Jacobian matrices. This enables automated, data-adaptive design of masks that occlude noise-dominated pixels while preserving signal context (Birnie et al., 2023).
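A hedged sketch of the corruption step with uniform random masking (the blind-mask variants above would instead shape the mask along the noise direction or from XAI-derived sensitivities):

```python
import numpy as np

rng = np.random.default_rng(3)

def blind_spot_corrupt(patch, frac=0.05, rng=rng):
    """Create a blind-spot training input: a fraction of pixels is replaced by
    randomly chosen other pixels from the patch. The loss is evaluated only at
    the masked pixels, so the network never sees the value it must predict;
    i.i.d. noise is then unlearnable while spatially coherent signal is not."""
    corrupted = patch.copy()
    mask = rng.random(patch.shape) < frac            # active (blind-spot) pixels
    donors = rng.integers(0, patch.size, size=int(mask.sum()))
    corrupted[mask] = patch.ravel()[donors]          # replace with random donors
    return corrupted, mask
```

Training then minimizes the loss between the network output and the original patch at the masked locations only.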

Transfer Learning and Domain Adaptation

Supervised base-training on lightweight synthetic datasets, followed by fine-tuning with blind-spot masking on field data, achieves a balance between noise removal and signal preservation that self-supervised or supervised methods alone cannot match. This hybrid strategy is computationally efficient and generalizes well to new geologies and noise regimes (Birnie et al., 2022).

Hybrid Regularization: S2S-WTV

Self2Self with Weighted Total Variation regularization hybridizes CNN expressiveness with interpretable total variation smoothness priors. The framework leverages trace-wise masking, alternating direction multiplier optimization, and dynamic spatial regularization to balance edge preservation and noise attenuation. S2S-WTV achieves state-of-the-art performance in random noise suppression (PSNR, SSIM) on both synthetic and field data, with runtime competitive with supervised deep learning (Xu et al., 2022).
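Only the weighted-TV half of the objective is simple enough to show in isolation. Below is a gradient-descent sketch on a smoothed 1-D version, 0.5*||u - x||^2 + lam * sum_i w_i * sqrt((u[i+1] - u[i])^2 + eps); the CNN term and the ADMM splitting of the full S2S-WTV method are omitted:

```python
import numpy as np

def wtv_denoise(x, weights, lam=0.5, step=0.05, n_iter=300, eps=1e-2):
    """Minimize the smoothed weighted-TV objective by gradient descent.

    weights : (len(x) - 1,) per-difference weights w_i; the dynamic spatial
              regularization above corresponds to adapting these weights.
    """
    u = x.copy()
    for _ in range(n_iter):
        d = np.diff(u)
        phi = weights * d / np.sqrt(d ** 2 + eps)   # derivative of smoothed |.|
        div = np.zeros_like(u)
        div[:-1] -= phi                             # d(penalty)/d(u[i])   gets -phi_i
        div[1:] += phi                              # d(penalty)/d(u[i+1]) gets +phi_i
        u -= step * ((u - x) + lam * div)
    return u
```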

5. Advanced Benchmarking, Evaluation, and Practical Considerations

Benchmark Datasets and Metrics

Recent benchmark datasets (e.g., real swell-noise corpus with synthetic base signals) support reproducible comparison of denoisers across noise types and SNRs (Barros et al., 2024). New metrics such as relative SNR (noise removal relative to the ground truth) and standard measures (PSNR, SSIM, local similarity, cross-correlation) are critical for detecting residual signal loss and over-smoothing.
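For reference, the two most common reference-based metrics are one-liners; the definitions follow the usual conventions, with "clean" being the ground truth available in benchmark settings:

```python
import numpy as np

def snr_db(clean, estimate):
    """SNR of an estimate against a known clean reference, in dB."""
    return 10 * np.log10(np.sum(clean ** 2) / np.sum((clean - estimate) ** 2))

def psnr_db(clean, estimate):
    """PSNR in dB, with the peak taken from the clean reference."""
    mse = np.mean((clean - estimate) ** 2)
    return 10 * np.log10(np.max(np.abs(clean)) ** 2 / mse)
```

SSIM and local similarity require windowed statistics and are usually taken from an image-quality library rather than reimplemented.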

Challenges in Real-World Scenarios

  • Over-denoising (signal leakage) and under-denoising (residual noise) remain concerns, especially for spatially varying or coherent noise fields (Yang et al., 2024, Shrivastava et al., 2023).
  • For data with uneven or spatially complex noise, adaptive graded denoising protocols integrate global and local BM3D-based filtering, similarity segmentation, and region-specific noise estimation to avoid artifacts and preserve subtle structure (Yang et al., 2024).
  • Computational efficiency is advanced by model architecture choices (e.g., direct residual inputs, removal of pooling/stride layers in large-volume CNNs), batch and data domain selection, and accelerated solvers for large 3D data.

Open Problems

  • Robustness to previously unseen, nonstationary, or mixed noise types (e.g., tube waves with overlapping signal spectra) remains imperfect in both supervised and self-supervised DNNs (Zhu et al., 3 Sep 2025, Zhu et al., 1 Mar 2025).
  • Objective mask or threshold selection in mask-based self-supervised frameworks often requires either expert interpretation or additional validation data (Birnie et al., 2023).
  • Joint denoising and super-resolution remain active areas: multi-scale GANs with attention and explicit edge-preserving loss functions currently define the performance frontier in post-stack settings (Yu et al., 2024).

6. Impact and Future Directions

Recent seismic denoising research trends emphasize integration of data-driven, self-tuned, and interpretable models, moving from rigid analytical transforms and manual parameterization to hybrid and adaptive learning. High-performing frameworks combine multi-scale representation, attention to geological structure, and domain-knowledge-inspired regularization. Explicit noise modeling (diffusion-based, self-supervised masking) has been shown to outperform label-based models in field deployments, particularly under challenging SNR conditions and in the absence of reliable clean labels (Zhu et al., 3 Sep 2025, Xu et al., 2022).

Directions for future work include:

  • Development of 3D and multi-component extensions;
  • Joint inversion-denoising and physics-informed neural operators;
  • Automated adaptive masking and domain adaptation;
  • Benchmarking using more complex real-world labelings and event preservation metrics;
  • Further reduction in inference and training wall-time for truly real-time field workflows.

A plausible implication is that the next generation of seismic denoisers will rely on multi-objective frameworks capable of optimizing physical SNR, structural integrity, and interpretability requirements simultaneously, adaptable to evolving field conditions without manual tuning or retraining.
