
DMRF-UNet: A Two-Stage Deep Learning Scheme for GPR Data Inversion under Heterogeneous Soil Conditions (2205.07567v1)

Published 16 May 2022 in eess.SP and eess.IV

Abstract: Traditional ground-penetrating radar (GPR) data inversion relies on iterative algorithms that suffer from high computational cost and low accuracy when applied to complex subsurface scenarios. Existing deep learning-based methods focus on ideal homogeneous subsurface environments and ignore the interference caused by clutter and noise in real-world heterogeneous environments. To address these issues, a two-stage deep neural network (DNN), called DMRF-UNet, is proposed to reconstruct the permittivity distributions of subsurface objects from GPR B-scans under heterogeneous soil conditions. In the first stage, a U-shaped DNN with multi-receptive-field convolutions (MRF-UNet1) is built to remove the clutter caused by the inhomogeneity of the heterogeneous soil. The denoised B-scan from MRF-UNet1 is then combined with the noisy B-scan and fed into the second-stage DNN (MRF-UNet2). MRF-UNet2 learns the inverse mapping and reconstructs the permittivity distribution of subsurface objects. To avoid information loss, an end-to-end training method that combines the loss functions of the two stages is introduced. A wide range of heterogeneous subsurface scenarios and B-scans are generated to evaluate the inversion performance. Test results from numerical experiments and real measurements show that the proposed network reconstructs the permittivities, shapes, sizes, and locations of subsurface objects with high accuracy. Comparison with existing methods demonstrates the superiority of the proposed methodology for inversion under heterogeneous soil conditions.
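
The abstract describes a two-stage pipeline: a denoising U-Net (MRF-UNet1), whose output is concatenated with the original noisy B-scan and passed to a second U-Net (MRF-UNet2) that predicts the permittivity map, with both stages trained end to end under a combined loss. The sketch below illustrates that data flow in PyTorch-style Python. The kernel sizes in the multi-receptive-field block, the network depth, channel counts, and the loss weighting are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the two-stage DMRF-UNet pipeline described in the abstract.
# MRF block kernel sizes, channel counts, and U-Net depth are assumptions.
import torch
import torch.nn as nn

class MRFBlock(nn.Module):
    """Parallel convolutions with different receptive fields, fused by a 1x1 conv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)  # assumed kernel sizes
        ])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.fuse(torch.cat([b(x) for b in self.branches], dim=1)))

class MRFUNet(nn.Module):
    """A small U-shaped encoder-decoder built from MRF blocks (illustrative depth)."""
    def __init__(self, in_ch, out_ch, base=16):
        super().__init__()
        self.enc1 = MRFBlock(in_ch, base)
        self.enc2 = MRFBlock(base, 2 * base)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = MRFBlock(3 * base, base)  # skip connection: 2*base + base channels
        self.out = nn.Conv2d(base, out_ch, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.out(d1)

# Stage 1 removes soil clutter from the noisy B-scan; stage 2 takes the denoised
# and noisy B-scans together and predicts the permittivity map.
net1 = MRFUNet(in_ch=1, out_ch=1)  # MRF-UNet1: clutter removal
net2 = MRFUNet(in_ch=2, out_ch=1)  # MRF-UNet2: permittivity inversion

def forward_pipeline(noisy_bscan):
    denoised = net1(noisy_bscan)
    perm = net2(torch.cat([denoised, noisy_bscan], dim=1))
    return denoised, perm

# End-to-end training combines the two stages' losses (the weighting is an assumption).
criterion = nn.MSELoss()
def combined_loss(denoised, clean_bscan, perm_pred, perm_true, alpha=0.5):
    return alpha * criterion(denoised, clean_bscan) + (1 - alpha) * criterion(perm_pred, perm_true)
```

Training both networks against this combined objective, rather than freezing MRF-UNet1 before training MRF-UNet2, is what the abstract refers to as avoiding information loss between stages.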

Citations (26)
