LongiMam: Deep Learning for Breast Cancer Risk

Updated 30 September 2025
  • LongiMam is an end-to-end deep learning model that integrates current and prior mammograms for breast cancer risk prediction.
  • It employs a composite convolutional-recurrent architecture to extract spatial features and temporal biomarkers such as asymmetries and density changes.
  • Empirical results reveal that including up to four prior exams improves AUC metrics, enhancing risk stratification in challenging clinical populations.

LongiMam is an end-to-end deep learning model developed to improve breast cancer risk prediction by exploiting sequential mammographic imaging. In contrast to prior approaches that commonly rely on a single (typically the most recent) mammogram, LongiMam integrates the current exam together with up to four prior mammograms. The central innovation is the joint modeling of spatial and temporal imaging variations via a composite convolutional-recurrent architecture. This enables extraction of subtle imaging biomarkers, including temporal asymmetries and density alterations, that are predictive of cancer risk in population-based screening scenarios marked by highly imbalanced outcomes and heterogeneous cohort follow-up. The LongiMam model and its empirical results are presented in "The LongiMam model for improved breast cancer risk prediction using longitudinal mammograms" (Rakez et al., 23 Sep 2025).

1. Model Architecture

LongiMam employs a multi-component structure that fuses spatial encoding by convolutional neural networks (CNNs) with sequence modeling by gated recurrent units (GRUs).

CNN Backbone and Projector:

The spatial module processes each image through a stack of six ConvBlock units, where each ConvBlock consists of a 3×3 convolution, batch normalization, and ReLU activation, followed by 2×2 max pooling. After the sixth block, an additional ConvBlock is applied to yield a feature map of (Height/64 × Width/64 × 256). Subsequently, a 1×1 convolution projects this feature map to 128 channels, and a final max pooling operation reduces it to a 1×128 vector per image.
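As a concrete illustration, the encoder described above can be sketched in PyTorch roughly as follows. This is a minimal reconstruction from the description, not the authors' released code: the per-block channel widths (other than the final 256) and the class names are assumptions.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """3x3 convolution + batch norm + ReLU, optionally followed by 2x2 max pooling."""
    def __init__(self, in_ch, out_ch, pool=True):
        super().__init__()
        layers = [
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        ]
        if pool:
            layers.append(nn.MaxPool2d(2))  # halves height and width
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

class SpatialEncoder(nn.Module):
    """Six pooled ConvBlocks (-> H/64 x W/64), an extra ConvBlock to 256 channels,
    a 1x1 projection to 128 channels, and a global max pool to a 1x128 vector."""
    def __init__(self, in_ch=1, widths=(16, 32, 64, 128, 256, 256)):  # widths are assumed
        super().__init__()
        blocks, ch = [], in_ch
        for w in widths:                                 # six blocks, each with 2x2 pooling
            blocks.append(ConvBlock(ch, w))
            ch = w
        blocks.append(ConvBlock(ch, 256, pool=False))    # additional block -> 256 channels
        self.backbone = nn.Sequential(*blocks)
        self.projector = nn.Conv2d(256, 128, kernel_size=1)  # 1x1 projection to 128 channels

    def forward(self, x):                                # x: (B, in_ch, H, W)
        f = self.projector(self.backbone(x))             # (B, 128, H/64, W/64)
        return f.amax(dim=(2, 3))                        # global max pool -> (B, 128)
```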

Temporal Modeling with RNNs:

Separate GRU blocks are trained for each mammographic view—craniocaudal (CC) and mediolateral oblique (MLO). For each visit, the feature vectors from left and right breasts (for a given view) are subtracted element-wise to emphasize asymmetry, producing a vector sequence for each view. The GRU processes this sequence in a many-to-one fashion, outputting a 1×128 summary for each view.
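A corresponding sketch of the per-view temporal module, under the same caveat: the single GRU layer and the hidden size of 128 are inferred from the 1×128 summary described above, and the interface is hypothetical.

```python
import torch
import torch.nn as nn

class ViewGRU(nn.Module):
    """Many-to-one GRU over left-right difference features for one view (CC or MLO)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.gru = nn.GRU(input_size=feat_dim, hidden_size=feat_dim, batch_first=True)

    def forward(self, left_seq, right_seq):
        # left_seq, right_seq: (B, T, 128) per-visit image features for one view,
        # ordered in time; element-wise subtraction emphasizes left-right asymmetry.
        diff_seq = left_seq - right_seq        # (B, T, 128)
        _, h_n = self.gru(diff_seq)            # many-to-one: keep only the final hidden state
        return h_n.squeeze(0)                  # (B, 128) summary for this view

cc_gru, mlo_gru = ViewGRU(), ViewGRU()         # separate GRU block per view, as described
```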

Classification Module:

The outputs of the CC and MLO GRU blocks are concatenated into a 1×256 vector. This is fed through three dense (fully connected) layers of 128, 32, and 1 units, respectively. The final activation is sigmoid, yielding a cancer probability. Training optimizes a sample-level binary cross-entropy loss weighted according to the case-to-control ratio:

l_c(x, y) = -w_c \left[ y \log \sigma(x) + (1 - y) \log\left(1 - \sigma(x)\right) \right]

where σ(x) is the sigmoid activation and w_c is the sample-level class weight derived from the case-to-control ratio.
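A minimal sketch of the classification head and this loss (PyTorch assumed; the exact way w_c is computed from the case-to-control ratio is not specified in the summary, so the per-sample weighting below is an assumption):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RiskHead(nn.Module):
    """Dense layers of sizes 128, 32, and 1 over the concatenated CC/MLO summaries."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, 1),                   # logit x; sigmoid is applied inside the loss
        )

    def forward(self, cc_summary, mlo_summary):
        z = torch.cat([cc_summary, mlo_summary], dim=1)  # (B, 256)
        return self.mlp(z).squeeze(1)                    # (B,) logits

def weighted_bce(logits, targets, case_weight, control_weight=1.0):
    """Per-sample weight w_c chosen by class; cases are up-weighted to
    compensate for the case-to-control imbalance (weighting scheme assumed)."""
    t = targets.float()
    w = t * case_weight + (1.0 - t) * control_weight
    return F.binary_cross_entropy_with_logits(logits, t, weight=w)
```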

2. Data Utilization and Input Design

LongiMam explicitly leverages longitudinal mammography by structuring model input as temporally ordered exam sequences.

  • Input Structure: Each screening episode consists of four images: CC and MLO views for each breast.
  • Temporal Integration: Up to four prior negative (cancer-free) exams are included with the current exam, ordered in reverse time (most recent prior is "Prior 1").
  • Modeling Scenarios:
    • Current only (1C): Most recent exam informs the risk prediction.
    • Priors + Current Visit (e.g., 1P1C, 2P1C, 3P1C, 4P1C): Historical exams augment the current image.
    • Priors only (e.g., 1P, 2P, 3P, 4P): Predictive value derived purely from temporal evolution without the current image.
  • This structure permits systematic analysis of the incremental benefit that historical imaging provides for individual risk stratification; a minimal sketch of such sequence construction follows below.
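As an illustration only, a priors-plus-current input such as 3P1C could be assembled along the following lines. The dictionary keys, tensor shapes, and the SpatialEncoder interface are hypothetical, carried over from the architecture sketches in Section 1.

```python
import torch

def build_visit_sequence(visits, encoder):
    """visits: list of dicts ordered oldest -> current (e.g. [Prior 3, Prior 2, Prior 1, Current]),
    each holding the four screening images (CC/MLO for each breast) as (1, H, W) tensors.
    Returns a (T, 128) feature sequence per view and side, ready for the per-view GRUs."""
    seqs = {("CC", "L"): [], ("CC", "R"): [], ("MLO", "L"): [], ("MLO", "R"): []}
    for visit in visits:
        for (view, side) in seqs:
            img = visit[f"{side}_{view}"].unsqueeze(0)           # (1, 1, H, W)
            seqs[(view, side)].append(encoder(img).squeeze(0))   # (128,) per image
    return {k: torch.stack(v) for k, v in seqs.items()}          # each value: (T, 128)
```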

3. Model Performance and Evaluation Metrics

Model performance is rigorously quantified via the area under the receiver operating characteristic curve (AUC), with 95% confidence intervals derived from bootstrap resampling.

  • Single-visit Baseline: Using only the most recent ("current") exam, the optimal training configuration achieves AUC = 0.742 (95% CI: 0.711–0.773).
  • With Priors: Incorporating up to four negative prior exams yields a maximal AUC of 0.770 (95% CI: 0.709–0.834), achieved in the 3P1C scenario. The addition of priors yields marginal but consistent improvements over the single-visit model.
  • Priors Only: Models that exclude the current exam perform less well—maximum AUC observed is ~0.674 with two priors, demonstrating the critical predictive value of recent imaging.
  • Subgroup Stratification: The model demonstrates improved or preserved AUC in several key clinical subgroups.
| Scenario | AUC (95% CI) | Description |
| --- | --- | --- |
| Current only (1C) | 0.767 (0.702–0.829) | Single most recent exam |
| 3 Priors + Current (3P1C) | 0.770 (0.709–0.834) | Three prior exams plus the current exam |
| Priors only (best) | ~0.674 | Best priors-only result (two priors) |

AUC values are from population-wide validation sets.
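For context, AUC with a bootstrap confidence interval of the kind reported above can be computed roughly as follows (scikit-learn and NumPy assumed; the number of resamples is illustrative and need not match the paper's protocol):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_bootstrap_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """y_true, y_score: 1-D NumPy arrays of labels (0/1) and predicted risks."""
    rng = np.random.default_rng(seed)
    point = roc_auc_score(y_true, y_score)
    boots = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)             # resample cases and controls with replacement
        if len(np.unique(y_true[idx])) < 2:     # AUC is undefined without both classes
            continue
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)
```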

4. Subgroup Analyses

LongiMam's effectiveness is further characterized via analyses stratified by tissue density, age, and temporal evolution in mammographic density; a minimal sketch of such per-subgroup evaluation follows the list below.

  • Mammographic Density: For "current visit only" models, AUC in non-dense breasts is 0.822 versus 0.706 in dense breasts. The addition of prior exams preserves or slightly elevates these values.
  • Age: Women <55 years exhibit higher AUCs (e.g., 0.828 for current visit only) compared to those ≥55 years (e.g., 0.748). Adding priors benefits both age groups.
  • Longitudinal Changes: Patients whose BI-RADS density shifts across exams show AUC gains of +0.062 (priors only) to +0.138 (priors + current). This suggests that dynamic changes in dense tissue are a salient marker when leveraged by temporal modeling.
  • The observed subgroup improvements highlight LongiMam's capacity to refine risk in challenging cohorts (dense breasts, older women), where baseline accuracy is often lower in clinical practice.
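As referenced above, the stratified evaluation amounts to computing AUC within each subgroup. The column names and the pandas dependency below are illustrative assumptions, not part of the released code:

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_aucs(df, group_col, label_col="cancer", score_col="risk_score"):
    """df: one row per woman with outcome label, model score, and subgroup columns
    (e.g. BI-RADS density category or age group)."""
    out = {}
    for group, sub in df.groupby(group_col):
        if sub[label_col].nunique() < 2:        # skip strata without both classes
            continue
        out[group] = roc_auc_score(sub[label_col], sub[score_col])
    return out

# Example: subgroup_aucs(predictions, "density")   # {"dense": ..., "non-dense": ...}
```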

5. Clinical and Practical Implications

The LongiMam architecture, by integrating historical and current mammograms, effectively models temporal trajectories of breast tissue that may signal latent oncogenic processes.

  • Clinical Utility: Enhanced discrimination supports personalized risk stratification—enabling more targeted screening regimens, reducing overdiagnosis in low-risk women, and improving cancer detection in high-risk groups.
  • Temporal Biomarker Extraction: The recurrent network identifies subtle spatiotemporal phenomena (such as changes in asymmetry or tissue density) not readily accessible through single-exam analysis.
  • Screening Practice: The findings endorse repeated imaging in screening programs, with incremental priors offering value particularly in populations with evolving breast density profiles.

6. Open-Source Availability

LongiMam is distributed as open-source software, available at https://github.com/manelrakez/LongiMam.git.

This facilitates external validation, collaborative model extension, and real-world clinical translation. The availability of code ensures transparency and reproducibility, aligning with current standards for machine learning research in healthcare.


LongiMam advances the state of breast cancer risk prediction by combining deep convolutional feature extraction and temporal sequence modeling to harness the full informational content of longitudinal mammography. Empirical results demonstrate that this longitudinal integration, especially when combining current and prior exams, yields higher AUCs relative to single-visit models, including in difficult clinical subgroups. The open-source release allows the broader research community to further develop, validate, and apply the model in diverse screening populations (Rakez et al., 23 Sep 2025).

References

  1. Rakez et al. (23 Sep 2025). The LongiMam model for improved breast cancer risk prediction using longitudinal mammograms.