Alternating Learning Approach for Variational Networks and Undersampling Pattern in Parallel MRI Applications (2110.14703v1)

Published 27 Oct 2021 in eess.IV, cs.LG, and eess.SP

Abstract:

Purpose: To propose an alternating learning approach to learn the sampling pattern (SP) and the parameters of variational networks (VN) in accelerated parallel magnetic resonance imaging (MRI).

Methods: The approach alternates between improving the SP, using bias-accelerated subset selection, and improving the parameters of the VN, using ADAM with monotonicity verification. The algorithm learns an effective pair: an SP that captures fewer k-space samples, generating undersampling artifacts that are removed by the VN reconstruction. The proposed approach was tested for stability and convergence, considering different initial SPs. The quality of the VNs and SPs was compared against other approaches, including joint learning methods and VN learning with fixed variable-density Poisson-disc SPs, using two different datasets and different acceleration factors (AF).

Results: The root mean squared error (RMSE) improvements ranged from 14.9% to 51.2% for AF from 2 to 20 in the tested brain and knee joint datasets when compared to the other approaches. The proposed approach showed stable convergence, obtaining similar SPs with the same RMSE under different initial conditions.

Conclusion: The proposed approach was stable and learned effective SPs with the corresponding VN parameters that produce images with better quality than other approaches, improving accelerated parallel MRI applications.
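The abstract describes an alternating scheme: with the SP fixed, the VN parameters are updated with ADAM under a monotonicity check; with the VN fixed, the SP is updated by bias-accelerated subset selection. The sketch below is only an illustration of that alternating structure, assuming a PyTorch setting with a toy single-coil problem, a small CNN standing in for the VN, and a simple greedy sample swap standing in for bias-accelerated subset selection; none of these stand-ins come from the paper.

```python
# Minimal sketch of an alternating SP/VN learning loop (illustrative assumptions only).
import torch
import torch.nn as nn

torch.manual_seed(0)
N = 32                    # toy k-space size (N x N)
budget = N * N // 4       # number of k-space samples kept (AF = 4)

class TinyVN(nn.Module):
    """Small CNN standing in for the variational-network reconstructor (assumption)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),
        )
    def forward(self, zero_filled):
        # residual refinement of the zero-filled reconstruction
        return zero_filled + self.net(zero_filled)

def undersample_recon(img, mask):
    """Zero-filled reconstruction from masked k-space (toy single-coil model)."""
    k = torch.fft.fft2(torch.complex(img[:, 0], img[:, 1]))
    k = k * mask                                  # apply the sampling pattern
    rec = torch.fft.ifft2(k)
    return torch.stack([rec.real, rec.imag], dim=1)

def rmse(a, b):
    return torch.sqrt(torch.mean((a - b) ** 2))

# toy "fully sampled" training images (2 channels: real/imag)
imgs = torch.randn(8, 2, N, N)

# initial SP: keep the highest-energy k-space locations (heuristic, not from the paper)
energy = torch.abs(torch.fft.fft2(torch.complex(imgs[:, 0], imgs[:, 1]))).mean(0)
mask = torch.zeros(N * N)
mask[torch.topk(energy.flatten(), budget).indices] = 1.0
mask = mask.reshape(N, N)

vn = TinyVN()
opt = torch.optim.Adam(vn.parameters(), lr=1e-3)
best_val = float("inf")

for outer in range(3):                            # alternating rounds
    # Step 1: fix the SP, train the VN parameters with Adam
    for _ in range(50):
        loss = rmse(vn(undersample_recon(imgs, mask)), imgs)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # simplified monotonicity check: warn if the RMSE did not decrease
    with torch.no_grad():
        val = rmse(vn(undersample_recon(imgs, mask)), imgs).item()
    if val > best_val:
        print(f"round {outer}: RMSE did not decrease ({val:.4f} > {best_val:.4f})")
    best_val = min(best_val, val)

    # Step 2: fix the VN, try a greedy sample swap in the SP
    # (stand-in for the paper's bias-accelerated subset selection)
    with torch.no_grad():
        on = mask.flatten().nonzero().squeeze(1)
        off = (mask.flatten() == 0).nonzero().squeeze(1)
        i = on[torch.randint(len(on), (1,))]
        j = off[torch.randint(len(off), (1,))]
        trial = mask.flatten().clone()
        trial[i], trial[j] = 0.0, 1.0
        trial = trial.reshape(N, N)
        if rmse(vn(undersample_recon(imgs, trial)), imgs).item() < best_val:
            mask = trial                          # accept the swap only if RMSE improves
    print(f"round {outer}: RMSE = {best_val:.4f}")
```

In this sketch the sample budget stays fixed (one location is turned off for each one turned on), mirroring the idea of learning an SP and VN pair at a given acceleration factor; the paper's actual subset-selection and monotonicity-verification procedures are more involved.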

Citations (16)
