
AutoMAC-MRI: MRI Artifact Grading

Updated 20 December 2025
  • The paper introduces AutoMAC-MRI, a novel framework that uses supervised contrastive learning to grade motion artifacts in brain MRI into three actionable levels.
  • It leverages a grade-specific affinity scoring mechanism (MoGrAS) to quantify slice proximity to motion artifact prototypes, enhancing result interpretability.
  • Empirical results demonstrate improved accuracy and reduced false positives compared to fully supervised methods, supporting real-time clinical quality control.

AutoMAC-MRI is an interpretable slice-wise framework for automated detection and severity assessment of motion artifacts in brain magnetic resonance imaging (MRI). Designed to address clinical workflow needs, AutoMAC-MRI provides fine-grained, actionable grading of motion artifacts—moving beyond binary diagnostic/nondiagnostic labels toward a transparent severity scale. Leveraging supervised contrastive learning to build a discriminative image representation aligned with expert-judged motion grades, the framework introduces a grade-specific affinity scoring mechanism that quantifies slice proximity to each motion artifact category, supporting real-time, interpretable MRI quality control across heterogeneous MR sequences and orientations (Jerald et al., 17 Dec 2025).

1. Clinical Motivation and Problem Formulation

Motion during MRI acquisition—arising from voluntary patient movement, breathing, or physiological pulsations—induces blurring, ringing, and ghosting artifacts that compromise structural and functional assessment. Such artifacts can skew downstream volumetric or functional measurement, frequently necessitating scan reacquisition. Existing automated quality control methods are predominantly binary, outputting “diagnostic” versus “nondiagnostic” labels via convolutional neural networks (CNNs) trained with cross-entropy loss. However, clinical technologists require a graded assessment (no motion, subtle, severe) to inform scan reacquisition decisions. Even multi-class approaches to artifact grading typically employ synthetic augmentation or per-contrast exemplar matching, limiting domain generalization and offering little interpretability regarding grade assignment confidence or feature separability. AutoMAC-MRI addresses these limitations by (1) enabling three-grade assessments aligned with actionable clinical categories and (2) providing transparent, interpretable quantification of artifact severity per case (Jerald et al., 17 Dec 2025).

2. Architectural Overview and Training Protocol

AutoMAC-MRI operates in two consecutive stages:

Stage 1: Feature Representation via Supervised Contrastive Learning

  • Backbone: An ImageNet-pretrained ResNet-18, augmented with two fully connected layers (each 512 units), forms the encoder. The last 512-D feature, $z_i$, encapsulates a slice-level embedding.
  • Supervised Contrastive Loss: Following Khosla et al. (2020), embeddings of slices sharing a motion grade are encouraged to cluster, while embeddings from distinct grades are repelled within the latent space. For a minibatch of $N$ samples with expert-provided grades $y_i \in \{1,2,3\}$ (1: No Motion, 2: Subtle, 3: Severe), the supervised contrastive loss per sample is:

$$L_i = -\frac{1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp(z_i \cdot z_p / \tau)}{\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)}$$

where $P(i)$ is the set of in-batch samples sharing grade $y_i$ (excluding $i$) and $A(i)$ is all samples in the batch other than $i$, with $\tau = 0.07$ and all $z$ normalized to unit length. Data augmentations follow SimCLR protocols, including random resized crop, horizontal flip, color jitter, random grayscale, and Gaussian blur.
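
The per-sample loss above can be sketched in a few lines of NumPy. This is an illustrative batched implementation of the Khosla et al. (2020) loss, not the authors' code; `embeddings` and `labels` are hypothetical inputs.

```python
import numpy as np

def supcon_loss(embeddings, labels, tau=0.07):
    """Supervised contrastive loss (Khosla et al., 2020), averaged over the batch.

    embeddings: (N, D) array of slice embeddings z_i (normalized internally).
    labels:     (N,) integer motion grades y_i.
    """
    labels = np.asarray(labels)
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / tau                          # pairwise z_i . z_a / tau
    n = len(labels)
    eye = np.eye(n, dtype=bool)
    exp_sim = np.exp(sim)
    denom = (exp_sim * ~eye).sum(axis=1)         # sum over a in A(i), i.e. all a != i
    log_prob = sim - np.log(denom)[:, None]
    # P(i): same-grade positives, excluding i itself
    pos = (labels[:, None] == labels[None, :]) & ~eye
    per_sample = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_sample.mean()
```

As a sanity check, a batch whose labels match its geometric clusters should incur a much lower loss than the same batch with mismatched labels.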

Stage 2: Motion Grade Classifier

  • The ResNet-18 encoder and the projection head are frozen.
  • A shallow MLP (512 input → 3 output units) is appended and trained with cross-entropy loss to predict discrete artifact grades from the learned embeddings.
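
Stage 2 can be sketched as a linear softmax head trained on frozen embeddings. The data below is synthetic (three well-separated clusters standing in for the Stage-1 embedding space); the head is kept to a single 512 → 3 layer for brevity, a minimal version of the paper's shallow MLP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for frozen Stage-1 embeddings: three separated
# clusters, one per motion grade (not data from the paper).
centers = rng.normal(size=(3, 512))
y = rng.integers(0, 3, size=96)                  # grades encoded 0..2
Z = centers[y] + 0.1 * rng.normal(size=(96, 512))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)    # encoder outputs are unit-norm

# Shallow head: 512 -> 3 linear layer trained with cross-entropy.
W = 0.01 * rng.normal(size=(512, 3))
b = np.zeros(3)

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

lr = 1.0
for _ in range(300):
    p = softmax(Z @ W + b)                       # (96, 3) class probabilities
    g = (p - np.eye(3)[y]) / len(y)              # gradient of mean cross-entropy
    W -= lr * (Z.T @ g)                          # only the head updates;
    b -= lr * g.sum(axis=0)                      # the encoder stays frozen

pred = np.argmax(Z @ W + b, axis=1)
```

Because the encoder is frozen, only the small head is optimized, which is why Stage 2 needs far fewer epochs than Stage 1.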

This pipeline is trained on 5,304 expert-annotated adult brain MRI slices (T1-w, T2-w, PD-w, FLAIR; axial, coronal, sagittal, oblique). The dataset is stratified to preserve MR contrast and orientation diversity. Standard preprocessing includes resizing to $224 \times 224$ and normalization to ImageNet statistics.
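
A minimal sketch of that preprocessing, assuming the standard ImageNet mean/std convention; the exact resizing and intensity-scaling pipeline is not specified in the source, so nearest-neighbor resizing and min-max scaling are illustrative choices here.

```python
import numpy as np

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def preprocess(slice_2d):
    """Resize a 2-D MRI slice to 224x224 (nearest-neighbor, for illustration),
    replicate it to 3 channels, and normalize to ImageNet statistics."""
    h, w = slice_2d.shape
    rows = np.arange(224) * h // 224
    cols = np.arange(224) * w // 224
    resized = slice_2d[np.ix_(rows, cols)].astype(float)
    # Scale raw scanner intensities to [0, 1] before channel-wise normalization.
    resized = (resized - resized.min()) / (np.ptp(resized) + 1e-8)
    rgb = np.stack([resized] * 3, axis=-1)       # (224, 224, 3)
    return (rgb - IMAGENET_MEAN) / IMAGENET_STD
```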

3. Grade-Specific Affinity Scores (MoGrAS) and Decision Rules

In addition to predicting a discrete motion grade, AutoMAC-MRI generates a Motion Grade Affinity Score (MoGrAS) for each severity level. For each grade $k \in \{1,2,3\}$, a prototype vector $c_k \in \mathbb{R}^{512}$ is computed as the median embedding of all training slices assigned grade $k$. At inference, the affinity $A_k$ of a test embedding $z_{\text{test}}$ to prototype $c_k$ is defined by the cosine similarity:

$$A_k = \cos(z_{\text{test}}, c_k) = \frac{z_{\text{test}}^T c_k}{\|z_{\text{test}}\|\,\|c_k\|}$$

Because all embeddings are $\ell_2$-normalized, $A_k \in [-1, +1]$. The model's final grade assignment is $k^* = \arg\max_k A_k$, though results are nearly identical to the MLP classifier's output when the embedding space is well separated. The MoGrAS vector $(A_1, A_2, A_3)$ provides explicit model conviction for each severity tier. For instance, a slice with MoGrAS $(0.12, 0.75, 0.35)$ clearly supports grade 2 (Subtle), while its elevated $A_3$ indicates some proximity to grade 3 (Severe), flagging the case as equivocal and warranting further review.
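
The prototype-and-affinity computation is straightforward to express directly from the definitions above. This NumPy sketch (illustrative, with hypothetical synthetic embeddings in the usage below) builds the median prototypes and returns the $(A_1, A_2, A_3)$ triplet for a test slice.

```python
import numpy as np

def mogras(train_embeddings, train_grades, test_embedding):
    """Motion Grade Affinity Scores: cosine similarity of a test slice
    embedding to each grade's prototype, where the prototype c_k is the
    element-wise median of that grade's training embeddings."""
    z = test_embedding / np.linalg.norm(test_embedding)
    scores = []
    for k in (1, 2, 3):
        c_k = np.median(train_embeddings[train_grades == k], axis=0)
        scores.append(float(z @ c_k / np.linalg.norm(c_k)))
    return np.array(scores)          # (A1, A2, A3), each in [-1, 1]
```

On well-separated embeddings, $\arg\max_k A_k$ recovers the slice's grade, matching the decision rule in the text.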

4. Experimental Protocol and Quantitative Results

Dataset and Procedures

A single MRI specialist with over 10 years of experience graded each slice into one of the three actionable categories. The dataset breaks down as follows:

  • Contrasts: T1-w (1,879), T2-w (646), PD-w (1,135), FLAIR (1,645).
  • Orientations: axial (3,183), coronal (1,804), sagittal (313), oblique (4).
  • Data Split: 2,552 train / 478 validation / 2,274 test slices.

Training used the Adam optimizer (learning rate 1e-4, weight decay 1e-4) and a batch size of 128 for Stage 1 (100 epochs); Stage 2 used Adam (learning rate 1e-3), batch size 64, 50 epochs. Experiments ran on dual NVIDIA V100 GPUs.

Performance Metrics

  • Overall Accuracy (Test, 2,274 slices):
    • Supervised Contrastive + MLP: 84.0%
    • Fully supervised 3-class ResNet-18/MLP: 83.2%
    • SimCLR + MLP: 68.2%
  • Class-Specific Metrics (SupCon+MLP vs Fully Supervised):
    • Precision (No Motion): 95.1% vs 87.4%
    • Recall (Severe): 94.0% vs 86.8%

Confusion matrices highlight that supervised contrastive pretraining reduces false positives on clean scans and improves severe artifact identification. Violin plots of MoGrAS values stratified by ground truth confirm monotonic score alignment: the median $A_3$ for severe-motion slices is approximately 0.85; for subtle, approximately 0.4; for no motion, approximately 0.1. The Spearman rank correlation between MoGrAS and expert ratings exceeds 0.8.
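
For reference, a Spearman rank correlation of this kind can be computed with a small tie-aware NumPy routine (a generic sketch of the statistic, not the authors' evaluation code; expert grades contain many ties, so average ranks are required):

```python
import numpy as np

def _ranks(v):
    """Ranks 1..n, with tied values assigned the average of their ranks."""
    v = np.asarray(v)
    order = np.argsort(v, kind="stable")
    ranks = np.empty(len(v), float)
    ranks[order] = np.arange(1, len(v) + 1)
    for val in np.unique(v):
        mask = v == val
        ranks[mask] = ranks[mask].mean()
    return ranks

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    rx = rx - rx.mean()
    ry = ry - ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```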

5. Qualitative and Interpretability Analyses

t-SNE visualizations of the 512-D embedding space show three distinct clusters corresponding to the motion grades for the SupCon-trained encoder. By contrast, SimCLR (self-supervised contrastive) embeddings are intermixed across grades, and fully supervised cross-entropy models exhibit partial overlap between the subtle and severe motion categories. Example MR slices annotated with (predicted grade, MoGrAS triplet) provide interpretable evidence mapping, e.g., high $A_3$ for slices with pronounced ringing, high $A_1$ for artifact-free images, and intermediate vectors for ambiguous/borderline cases.

MoGrAS vectors support real-time, actionable interpretability: abrupt changes in $A_1$ (decreasing) or $A_3$ (increasing) across slices signal progressive motion, enabling immediate technologist intervention. The interpretability granularity enables prioritization of marginal cases for manual verification versus routine rescanning decisions.
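
One hypothetical way such inline monitoring could be scripted is to flag slices whose affinities jump relative to the previous slice; the threshold values below are illustrative choices, not parameters from the paper.

```python
import numpy as np

def flag_motion_onset(a1_series, a3_series, drop=0.2, rise=0.2):
    """Return indices of slices where A1 falls, or A3 rises, abruptly
    relative to the previous slice (thresholds are illustrative)."""
    a1 = np.asarray(a1_series, float)
    a3 = np.asarray(a3_series, float)
    d1 = np.diff(a1)                 # slice-to-slice change in "no motion" affinity
    d3 = np.diff(a3)                 # slice-to-slice change in "severe" affinity
    return np.where((d1 <= -drop) | (d3 >= rise))[0] + 1
```

For example, a series where $A_1$ collapses and $A_3$ spikes at the fourth slice would flag that slice for technologist review.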

6. Integration into Clinical Workflow and Limitations

By delivering MoGrAS for each acquired slice in real time, AutoMAC-MRI facilitates inline MRI quality assurance. The gradated affinity output supplies nuance not available from conventional hard-labeling schemes, assisting in reducing unnecessary rescans and streamlining decision-making. Labels derived from a single specialist at one site suggest the importance of future validation with multi-rater, multi-center datasets for improved generalizability and reliability. Anticipated methodological research includes artifact localization (e.g., Grad-CAM applied to the encoder), expansion from 2D slicewise to 3D volumetric grading, and semi-supervised extensions to accommodate large-scale unlabeled data resources (Jerald et al., 17 Dec 2025).

7. Summary of Key Equations and Methodological Distinctions

The principal equations governing AutoMAC-MRI’s approach are:

  • Supervised Contrastive Loss:

$$L_{\mathrm{con}} = \frac{1}{N} \sum_{i=1}^{N} \Bigg[ -\frac{1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp(z_i \cdot z_p / \tau)}{\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)} \Bigg]$$

  • Motion Grade Affinity Score (MoGrAS):

$$A_k = \frac{z_{\mathrm{test}}^T c_k}{\|z_{\mathrm{test}}\|\,\|c_k\|} \in [-1, 1]$$

AutoMAC-MRI unifies a ResNet-18 encoder trained with supervised contrastive learning and grade-prototype cosine affinity scoring to offer accurate, interpretable, and contrast-agnostic motion artifact grading. Its architecture and output paradigm enable actionable, transparent quality control in clinical MRI workflows (Jerald et al., 17 Dec 2025).
