
RSVD-WTNNR-iALM: Low-Rank FWI Regularization

Updated 2 December 2025
  • The paper presents RSVD-WTNNR-iALM as an innovative framework integrating rSVD and WTNNR to suppress noise in high-resolution full-waveform inversion.
  • It employs weighted truncated nuclear norm regularization to selectively penalize tail singular values, thereby improving edge preservation and model accuracy.
  • The method leverages an inexact augmented Lagrangian approach to achieve two- to three-fold convergence speedup, validated on challenging seismic datasets.

RSVD-WTNNR-iALM is a computational framework designed to address the challenges of high-resolution, noise-resilient full-waveform inversion (FWI) in seismic imaging. This method synergistically combines randomized singular value decomposition (rSVD) with weighted truncated nuclear norm regularization (WTNNR) and embeds them within an inexact augmented Lagrangian method (iALM) optimizer. The objective is to suppress random noise and improve image fidelity in the inversion of subsurface velocity models from complex seismic data, all while accelerating convergence by leveraging efficient, low-rank matrix factorizations and robust optimization strategies (Li et al., 2023).

1. FWI Formulation and Regularization

FWI seeks to recover a discretized velocity model $\mathbf{m} \in \mathbb{R}^n$ from observed seismic data $\mathbf{d}_{\text{obs}} \in \mathbb{R}^m$ by minimizing a misfit functional:

$$\min_{\Delta \mathbf{m}} \; J(\Delta \mathbf{m}) = \frac{1}{2} \|F(\mathbf{m}_k + \Delta \mathbf{m}) - \mathbf{d}_{\text{obs}}\|_2^2 + \lambda R(\Delta \mathbf{m}),$$

where $F(\cdot)$ is the forward modeling operator and $R(\Delta \mathbf{m})$ is a regularization term. The variable $\Delta \mathbf{m}$ denotes the velocity increment at Gauss–Newton step $k$. In the frequency-domain linearization, the least-squares subproblem becomes:

$$\min_{\Delta \mathbf{m}} \; \frac{1}{2} \|J_k \Delta \mathbf{m} + \mathbf{r}_k\|_2^2 + \lambda R(\Delta \mathbf{m}),$$

where $J_k$ is the Jacobian and $\mathbf{r}_k = F(\mathbf{m}_k) - \mathbf{d}_{\text{obs}}$.

The increment vector $\Delta \mathbf{m} \in \mathbb{R}^{pq}$ (with $p \cdot q = n$) is reshaped as a matrix $\Delta M$, on which low-rank regularization is imposed. WTNNR encourages this low-rank structure, aiding denoising and structure preservation.

2. Weighted Truncated Nuclear Norm Regularization and rSVD

WTNNR applies a regularization penalty to the tail singular values of the velocity update matrix. The formal definition is:

$$\|X\|_{*,r}^{W} = \sum_{i=r+1}^{\min(p,q)} w_i \, \sigma_i(X),$$

where $\{\sigma_i(X)\}$ are the singular values in descending order and $\{w_i\}$ are positive adaptive weights, typically

$$w_i = \frac{\beta}{\sigma_i^{(\ell)} + \varepsilon}$$

at iteration $\ell$, with $\beta > 0$ and $\varepsilon \ll 1$. This form penalizes only the singular values beyond rank $r$, promoting selective truncation and adaptive denoising.
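As a minimal numerical sketch (not the paper's implementation), the weighted truncated nuclear norm above can be evaluated directly with NumPy; the function name `wtnnr` and the default values of $\beta$ and $\varepsilon$ are illustrative assumptions:

```python
import numpy as np

def wtnnr(X, r, beta=1.0, eps=1e-6):
    """Weighted truncated nuclear norm ||X||_{*,r}^W: weighted sum of the
    tail singular values sigma_{r+1}, ..., sigma_{min(p,q)}."""
    sigma = np.linalg.svd(X, compute_uv=False)  # descending order
    w = beta / (sigma + eps)                    # adaptive weights w_i = beta/(sigma_i + eps)
    return float(np.sum(w[r:] * sigma[r:]))     # penalize only the tail

# A rank-2 matrix: the tail beyond r = 2 is numerically zero.
X = np.outer(np.arange(1, 5.0), np.ones(3)) + np.outer(np.ones(4), np.arange(3.0))
print(wtnnr(X, r=2) < 1e-6)  # prints True: no penalty beyond the true rank
```

Note that with these weights, each significant singular value contributes roughly $\beta$ to the norm, so the penalty effectively counts (and suppresses) tail rank rather than raw singular-value magnitude.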

For computational efficiency, rSVD is employed to approximate the SVD of large matrices:

  1. Draw a Gaussian test matrix $\Omega \in \mathbb{R}^{q \times k}$;
  2. Compute $Y = \Delta M \, \Omega$ and perform the QR decomposition $Y = QR$;
  3. Form $B = Q^{T} \Delta M$ and compute its SVD $B = \tilde{U} \Sigma V^{T}$;
  4. Approximate $\Delta M \approx (Q \tilde{U}) \Sigma V^{T}$.

The truncation rank $r$ is chosen adaptively, typically tracking the dominant singular values.
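The four steps above translate almost line-for-line into NumPy. This sketch adds a small oversampling parameter, a standard refinement from the randomized-SVD literature, and is illustrative rather than the paper's code:

```python
import numpy as np

def rsvd(A, k, oversample=10, seed=0):
    """Randomized SVD: approximate the top-k singular triplets of A
    at O(p*q*k) cost instead of a full SVD."""
    rng = np.random.default_rng(seed)
    p, q = A.shape
    Omega = rng.standard_normal((q, k + oversample))  # 1. Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)                    # 2. orthonormal range basis
    B = Q.T @ A                                       # 3. small (k+o) x q matrix
    U_t, S, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_t)[:, :k], S[:k], Vt[:k]            # 4. A ≈ U diag(S) V^T

# Sanity check on an exactly rank-5 matrix:
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, S, Vt = rsvd(A, k=5)
err = np.linalg.norm(A - U @ np.diag(S) @ Vt) / np.linalg.norm(A)
print(err < 1e-8)  # prints True: a rank-5 matrix is recovered almost exactly
```

Since only matrix–matrix products with a thin sketch and an SVD of a small matrix are needed, the cost scales as $O(pqk)$ rather than $O(pq\min(p,q))$.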

3. Inexact Augmented Lagrangian Method (iALM)

iALM introduces an auxiliary variable $Z$ to decouple the nuclear norm regularization from the quadratic misfit term, under the constraint $\Delta M = Z$. The augmented Lagrangian is:

$$L_\mu(\Delta M, Z, Y) = \frac{1}{2} \|J \Delta \mathbf{m} + \mathbf{r}\|_2^2 + \lambda \|Z\|_{*,r}^{W} + \langle Y, \Delta M - Z \rangle + \frac{\mu}{2} \|\Delta M - Z\|_F^2,$$

with dual variable $Y$ and penalty parameter $\mu$.

Each iALM iteration involves:

  1. (Inexact) minimization with respect to $\Delta M$ (typically using Gauss–Newton or conjugate gradient steps);
  2. Closed-form update for $Z$ via weighted singular value thresholding (SVT), accelerated by rSVD;
  3. Dual variable $Y$ update;
  4. Optional increase of $\mu$.

Convergence criteria are based on the primal and dual residuals:

$$\|\Delta M^{t+1} - Z^{t+1}\|_F \leq \epsilon_{\text{primal}}, \qquad \|Z^{t+1} - Z^{t}\|_F \leq \epsilon_{\text{dual}}.$$
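The iteration and stopping rule can be sketched on a toy problem in which the Gauss–Newton $\Delta M$-subproblem is replaced by a closed-form proximal analogue (data term $\tfrac12\|\Delta M - D\|_F^2$ for an assumed observed matrix $D$); all names and parameter values are illustrative assumptions, not the paper's:

```python
import numpy as np

def weighted_svt(M, r, tau, beta=1.0, eps=1e-6):
    """Weighted SVT: keep the leading r singular values, soft-threshold
    the tail with adaptive weights w_i = beta/(sigma_i + eps)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    w = beta / (s + eps)
    s_new = s.copy()
    s_new[r:] = np.maximum(s[r:] - tau * w[r:], 0.0)  # threshold tail only
    return (U * s_new) @ Vt

def ialm(D, lam=0.5, r=5, mu=1e-3, rho=2.0, iters=100, tol=1e-4):
    """iALM for min 0.5||dM - D||_F^2 + lam ||Z||_{*,r}^W  s.t. dM = Z."""
    dM, Z, Y = (np.zeros_like(D) for _ in range(3))
    for _ in range(iters):
        dM = (D - Y + mu * Z) / (1.0 + mu)              # 1. primal update (closed form here)
        Z_new = weighted_svt(dM + Y / mu, r, lam / mu)  # 2. Z via weighted SVT
        Y = Y + mu * (dM - Z_new)                       # 3. dual ascent
        primal = np.linalg.norm(dM - Z_new)
        dual = np.linalg.norm(Z_new - Z)
        Z = Z_new
        if primal <= tol and dual <= tol:               # residual-based stopping
            break
        mu = min(rho * mu, 1e6)                         # 4. grow penalty (capped)
    return dM, Z

rng = np.random.default_rng(0)
D = rng.standard_normal((30, 20))
dM, Z = ialm(D)
print(np.linalg.norm(dM - Z) < 1e-2)  # prints True: primal feasibility at exit
```

In the full method, step 1 would instead run a few Gauss–Newton or conjugate gradient iterations on the Jacobian system, and the SVD inside `weighted_svt` would be replaced by the rSVD approximation.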

4. Algorithmic Workflow

The complete RSVD-WTNNR-iALM FWI proceeds iteratively within a multiscale frequency-stepping scheme:

  • Initialize velocity increment, auxiliary variable, dual variable, and penalty parameter.
  • Repeat iALM steps for the current frequency band:

    1. Form Jacobian and residual;
    2. Update $\Delta M$;
    3. Compute the rSVD of $\Delta M^{\text{new}} + Y/\mu$;
    4. Apply weighted SVT to the singular values for the $Z$ update;
    5. Update $Y$ and $\mu$.
  • Upon convergence, update the model and proceed to the next frequency band.

Parameter choices include ramping $r$ from a small value at low frequencies up to approximately 50, weights $w_i$ with $\beta \in [0.5, 2.0]$, $\varepsilon = 10^{-6}$, initial $\mu_0 \sim 10^{-3}$, multiplier $\rho = 2$, and tolerances $\epsilon_{\text{primal}}, \epsilon_{\text{dual}} \sim 10^{-4}$.
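A hedged sketch of such a schedule follows; the specific band frequencies and the linear rank ramp are illustrative assumptions, and only the quoted parameter ranges come from the text:

```python
import numpy as np

# Truncation rank r ramps up with frequency band; other parameters stay
# within the quoted ranges. All concrete values below are illustrative.
freq_bands = [3.0, 5.0, 8.0, 12.0, 20.0]                # Hz, low to high
ranks = np.linspace(5, 50, len(freq_bands)).astype(int)  # small r -> ~50

params = dict(beta=1.0, eps=1e-6, mu0=1e-3, rho=2.0,
              eps_primal=1e-4, eps_dual=1e-4)

for f, r in zip(freq_bands, ranks):
    mu = params["mu0"]  # reset penalty at the start of each band
    # ... run iALM iterations at band f with truncation rank r ...
    print(f"band {f:5.1f} Hz: rank r = {r}")
```

Restarting $\mu$ at each band keeps the early iterations of a new frequency loosely constrained, consistent with the multiscale continuation strategy described above.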

5. Numerical Performance and Empirical Results

On the 2004 BP salt model under signal-to-noise ratios of 8, 12, and 16 dB, RSVD-WTNNR-iALM demonstrates:

  • Approximately twofold reduction in misfit per iteration compared to Tikhonov-regularized FWI.
  • A final relative model error ($\|\mathbf{m}_{\text{true}} - \mathbf{m}_{\text{est}}\|_2 / \|\mathbf{m}_{\text{true}}\|_2$) 30–50% lower at 8 dB and 15–25% lower at 16 dB after 2000 iterations.
  • Enhanced recovery of deep salt plume structures and sharper boundary delineation relative to traditional FWI.
  • Reduced RMS profile error in velocity (approximately 0.1 km/s versus 0.2 km/s).
  • The low-rank truncation robustly removes random noise-induced features in $\Delta \mathbf{m}$, reducing the likelihood of overfitting.

In practice, the iALM approach achieves a two- to three-fold speedup in outer-loop convergence compared to fixed-penalty ALM.

6. Computational and Methodological Significance

RSVD-WTNNR-iALM integrates:

  • rSVD for dimensionality reduction, reducing the SVD cost to $O(pqk)$ for a $p \times q$ matrix and target rank $k$;
  • WTNNR for adaptive, data-dependent regularization, improving denoising and edge retention;
  • A multi-block iALM strategy for robust, accelerated convergence despite inexact subproblem solutions.

This unified framework provides a resilient, high-resolution, and noise-suppressing alternative to conventional FWI regularization techniques, particularly under challenging noise conditions and limited prior information about subsurface models (Li et al., 2023).

7. Context and Implications

The RSVD-WTNNR-iALM approach demonstrates that targeted low-rank penalization—coupled with randomized linear algebra and advanced augmented Lagrangian solvers—provides substantial advantages for inverse problems plagued by noise and ill-posedness. Its algorithmic structure facilitates scalability to large seismic data volumes and suggests a general template for integrating efficient matrix factorization and adaptive regularization within iterative PDE-constrained optimization frameworks. A plausible implication is that this strategy can be generalized to other large-scale imaging and signal recovery problems where low-rank structure, efficiency, and noise resilience are critical (Li et al., 2023).

References (1)
