A Transformer-based Network for Deformable Medical Image Registration (2202.12104v3)

Published 24 Feb 2022 in eess.IV, cs.CV, and cs.LG

Abstract: Deformable medical image registration plays an important role in clinical diagnosis and treatment. Recently, deep learning (DL) based image registration methods have been widely investigated and have shown excellent computational speed. However, these methods cannot provide sufficient registration accuracy because of their limited ability to represent both the global and local features of the moving and fixed images. To address this issue, this paper proposes a transformer-based image registration method. The method uses a distinctive transformer to extract global and local image features for generating the deformation fields, from which the registered image is produced in an unsupervised way. It improves registration accuracy effectively by means of a self-attention mechanism and bi-level information flow. Experimental results on brain MR image datasets such as LPBA40 and OASIS-1 demonstrate that, compared with several traditional and DL-based registration methods, the proposed method provides higher registration accuracy in terms of Dice values.

Authors (3)
  1. Yibo Wang (111 papers)
  2. Wen Qian (5 papers)
  3. Xuming Zhang (6 papers)
Citations (9)

Summary

The paper entitled "A Transformer-based Network for Deformable Medical Image Registration" focuses on improving the accuracy of deformable medical image registration, which is critical in clinical diagnosis and treatment. Existing deep learning (DL) based methods, while fast, often fall short in accuracy due to their limited ability to represent both the global and local features of the moving and fixed images.

To address this limitation, the authors propose a novel method utilizing a transformer-based network. Transformers are renowned for their self-attention mechanism, which excels at capturing intricate dependencies in data. The proposed method leverages this mechanism to improve the extraction of both global and local features from the moving and fixed images. The network then uses these features to generate deformation fields, which in turn produce the registered image in an unsupervised manner.
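
To make this workflow concrete, below is a minimal sketch of the generic unsupervised registration loop described above, written in PyTorch. The names `SpatialWarper`, `registration_loss`, and `RegistrationNet` are hypothetical illustrations rather than the paper's actual implementation, and the transformer backbone itself is represented only by a placeholder.

```python
# Illustrative sketch of unsupervised deformable registration:
# predict a displacement field, warp the moving image, and compare it to the fixed image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialWarper(nn.Module):
    """Warps a 2D moving image with a dense displacement field via grid_sample."""
    def __init__(self, height, width):
        super().__init__()
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, height), torch.linspace(-1, 1, width), indexing="ij"
        )
        # Identity sampling grid in normalized [-1, 1] coordinates, shape (1, H, W, 2).
        self.register_buffer("identity_grid", torch.stack((xs, ys), dim=-1).unsqueeze(0))

    def forward(self, moving, displacement):
        # displacement: (B, 2, H, W) offsets in normalized coordinates (x-offset, y-offset).
        grid = self.identity_grid + displacement.permute(0, 2, 3, 1)
        return F.grid_sample(moving, grid, align_corners=True)

def registration_loss(warped, fixed, displacement, smooth_weight=0.01):
    """Image similarity (MSE here) plus a first-order smoothness penalty on the field."""
    similarity = F.mse_loss(warped, fixed)
    dx = displacement[:, :, :, 1:] - displacement[:, :, :, :-1]
    dy = displacement[:, :, 1:, :] - displacement[:, :, :-1, :]
    smoothness = dx.abs().mean() + dy.abs().mean()
    return similarity + smooth_weight * smoothness

# Usage sketch (RegistrationNet is a hypothetical transformer backbone):
# net = RegistrationNet()
# warper = SpatialWarper(H, W)
# field = net(torch.cat([moving, fixed], dim=1))   # (B, 2, H, W)
# warped = warper(moving, field)
# loss = registration_loss(warped, fixed, field)
```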

Key highlights include:

  • Self-Attention Mechanism: Utilizing self-attention allows the model to weigh the importance of different parts of the image, thus enhancing the representation of features across varying spatial scales.
  • Bi-level Information Flow: This concept is employed to further refine the feature extraction process by ensuring that both high-level contextual information and fine-grained details are effectively captured throughout the network layers.
  • Experimental Validation: The effectiveness of the proposed method is validated on two brain MR image datasets, LPBA40 and OASIS-1. The results show that the transformer-based method achieves higher registration accuracy than several traditional and DL-based registration techniques, as measured by the Dice coefficient (sketched after this list).
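
For reference, here is a minimal sketch of the Dice coefficient used as the evaluation metric, assuming integer-labeled segmentation maps for the warped and fixed images. The helper names (`dice_score`, `mean_dice`) are illustrative and not taken from the paper.

```python
import numpy as np

def dice_score(seg_a, seg_b, label):
    """Dice coefficient for one anatomical label: 2|A ∩ B| / (|A| + |B|)."""
    a = (seg_a == label)
    b = (seg_b == label)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # label absent from both maps; treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def mean_dice(warped_seg, fixed_seg, labels):
    """Average Dice across the labels of interest (e.g., a set of brain regions)."""
    return float(np.mean([dice_score(warped_seg, fixed_seg, lbl) for lbl in labels]))
```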

In conclusion, this paper introduces a significant advancement in medical image registration by leveraging the capabilities of transformer architectures. Its stronger global and local feature representation, and the resulting gains in registration accuracy, hold promise for improving outcomes in clinical applications.