Propagating the prior from shallow to deep with a pre-trained velocity-model Generative Transformer network (2408.09767v1)

Published 19 Aug 2024 in physics.geo-ph, cs.AI, and physics.comp-ph

Abstract: Building subsurface velocity models is essential to our goals in utilizing seismic data for Earth discovery and exploration, as well as monitoring. With the advent of machine learning, these velocity models (or, more precisely, their distribution) can be stored accurately and efficiently in a generative model. These stored velocity model distributions can be utilized to regularize or quantify uncertainties in inverse problems, like full waveform inversion. However, most generators, like normalizing flows or diffusion models, treat the image (velocity model) uniformly, disregarding spatial dependencies and resolution changes with respect to the observation locations. To address this weakness, we introduce VelocityGPT, a novel implementation that utilizes Transformer decoders trained autoregressively to generate a velocity model from the shallow subsurface to the deep. Because seismic data are often recorded on the Earth's surface, a top-down generator can utilize the inverted information in the shallow section as guidance (a prior) for generating the deeper parts. To facilitate the implementation, we use an additional network to compress the velocity model. We also inject prior information, such as well data or structural information (represented by a migration image), into the generation of the velocity model. Using synthetic data, we demonstrate the effectiveness of VelocityGPT as a promising approach in generative model applications for seismic velocity model building.
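The sketch below (not the authors' code) illustrates the kind of autoregressive, top-down generation the abstract describes: a GPT-style Transformer decoder that emits a discrete-token velocity model one depth row at a time, from shallow to deep, optionally conditioned on a prior embedding (e.g., from a well log or migration image). The vocabulary size, tokens per row, number of rows, start-token convention, and prior shape are all illustrative assumptions; a separate compression network (such as a VQ-style autoencoder, not shown) is assumed to map each depth slice to discrete codes.

```python
# Illustrative sketch in PyTorch, not the paper's implementation.
# Assumes a pretrained compression network has tokenized each depth row
# of the velocity model into TOKENS_PER_ROW integer codes.
import torch
import torch.nn as nn

VOCAB_SIZE = 512        # assumed codebook size of the compression network
TOKENS_PER_ROW = 16     # assumed tokens per depth slice
N_ROWS = 8              # depth slices, ordered shallow -> deep
MAX_LEN = TOKENS_PER_ROW * N_ROWS

class VelocityDecoder(nn.Module):
    def __init__(self, d_model=128, n_heads=4, n_layers=4):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB_SIZE, d_model)
        self.pos_emb = nn.Embedding(MAX_LEN, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB_SIZE)

    def forward(self, tokens, prior=None):
        # tokens: (batch, seq) integer codes, ordered shallow -> deep
        # prior:  optional (batch, k, d_model) embedding of well / migration-image
        #         conditioning, prepended so every prediction can attend to it
        b, t = tokens.shape
        x = self.tok_emb(tokens) + self.pos_emb(torch.arange(t, device=tokens.device))
        if prior is not None:
            x = torch.cat([prior, x], dim=1)
        # causal mask: each position only sees shallower (earlier) tokens
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1)).to(x.device)
        h = self.blocks(x, mask=mask)
        if prior is not None:
            h = h[:, prior.size(1):]    # keep only the velocity-token positions
        return self.head(h)             # next-token logits

@torch.no_grad()
def generate(model, prior=None, batch=1, device="cpu"):
    """Sample a full velocity model autoregressively, shallow rows first."""
    # token id 0 doubles as the start token in this toy setup (assumption)
    tokens = torch.zeros(batch, 1, dtype=torch.long, device=device)
    for _ in range(MAX_LEN - 1):
        logits = model(tokens, prior)[:, -1]
        next_tok = torch.multinomial(logits.softmax(-1), 1)
        tokens = torch.cat([tokens, next_tok], dim=1)
    # rows ordered top (shallow) to bottom (deep); decode with the compression net
    return tokens.view(batch, N_ROWS, TOKENS_PER_ROW)

model = VelocityDecoder()
sample = generate(model)
print(sample.shape)  # (1, N_ROWS, TOKENS_PER_ROW)
```

Because generation proceeds row by row from the surface downward, shallow tokens, which are typically the best constrained by surface seismic data, act as the conditioning context for every deeper prediction, which is the "propagating the prior from shallow to deep" idea in the title.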
