
LUMPNet: Deep Learning in Nodule Detection

Updated 12 January 2026
  • LUMPNet is a deep-learning framework that combines advanced image and tactile pipelines to detect disease-indicative nodules in veterinary and clinical settings.
  • The system employs state-of-the-art models like YOLOv11, EfficientNet, and InceptionTime, enhanced by custom optimizers such as AWDR, to improve detection sensitivity and specificity.
  • LUMPNet demonstrates marked performance gains over conventional methods by reducing false negatives and achieving high accuracy in both lumpy skin disease and breast lump detection.

LUMPNet denotes distinct deep-learning–based frameworks for medical nodule/lump detection, explicitly named and described in recent literature for two applications: (1) image-based early detection of lumpy skin disease (LSD) in cattle using a hybrid computer vision pipeline (Ubaidullah et al., 5 Jan 2026), and (2) detection and localization of breast lumps via a tactile glove and time-series modeling (Syrymova et al., 15 Feb 2025). Both systems are unified by the adoption of advanced neural architectures and rigorous multi-stage workflows for high-accuracy detection of disease-indicative nodules.

1. System Architectures

Lumpy Skin Disease Detection Pipeline

LUMPNet for LSD is a two-stage, image-centric deep learning pipeline constructed as follows (Ubaidullah et al., 5 Jan 2026):

  • Input: 640×640×3 RGB cattle images.
  • Segmentation: A hybrid of manual masking and the Segment Anything Model (SAM) isolates the animal; backgrounds are recolored for invariant preprocessing.
  • Feature Construction: Three feature maps are derived (edge map E, masked RGB C, and segmentation mask B) and concatenated as F = [E, C, B].
  • YOLOv11 Detector: F is processed by YOLOv11, which comprises C3k2 blocks, SPPF, C2PSA attention, and a PANet-based neck with three detection heads (spatial scales: 80×80, 40×40, 20×20), outputting bounding boxes around candidate skin nodules.
  • EfficientNet Classifier: Detected crops are rescaled and input to an EfficientNet-B0 network that assigns binary labels ("LSD-affected" or "healthy") to each region.
  • Decision Aggregation: If any crop is classified as "LSD-affected", the source image is marked positive.
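The feature-construction and decision-aggregation steps above can be sketched as follows; the function names, the single-channel layout for the edge map and mask, and the stubbed crop labels are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def build_feature_stack(edge_map, masked_rgb, seg_mask):
    """Concatenate F = [E, C, B] along the channel axis."""
    return np.concatenate([edge_map, masked_rgb, seg_mask], axis=-1)

def aggregate_image_label(crop_labels):
    """Mark the source image LSD-positive if ANY crop is classified positive."""
    return any(label == "LSD-affected" for label in crop_labels)

E = np.zeros((640, 640, 1))  # edge map (assumed single-channel)
C = np.zeros((640, 640, 3))  # masked RGB image
B = np.zeros((640, 640, 1))  # binary segmentation mask (assumed single-channel)
F = build_feature_stack(E, C, B)
print(F.shape)                                             # (640, 640, 5)
print(aggregate_image_label(["healthy", "LSD-affected"]))  # True
```

The aggregation rule is deliberately conservative: a single positive crop flags the whole image, which trades precision for a low false-negative rate.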

Tactile Breast Lump Detection Pipeline

LUMPNet for breast lumps fuses a hardware multi-sensor tactile glove with a 1D convolutional neural architecture (Syrymova et al., 15 Feb 2025):

  • Sensor Hardware: A flexible glove with 30 pressure channels (TakkStrip 2 modules) and two MEMS accelerometers captures spatiotemporal force signals during palpation of silicone breast phantoms.
  • Data Preprocessing: Each palpation yields a 15×1120 tensor (pressure channels × time). Channels are min–max normalized; missing samples are mean-imputed.
  • Neural Model: The InceptionTime network forms the backbone. Each stack consists of a 1×1 bottleneck conv, three parallel f_k×1 convs (f_k = 10, 20, 40), and a max pool, with residual skip connections and batch normalization.
  • Prediction Heads: In single-task (STL), sigmoid output for lump presence; in multi-task (MTL), separate softmax heads for presence (2-way), lump size (3-way), and lump position (3-way).
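The MTL head arrangement can be sketched as three independent softmax heads over a shared pooled feature vector; the 128-dim feature size and the random weights are assumptions for illustration only:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
features = rng.standard_normal((4, 128))   # batch of pooled backbone features
head_sizes = {"presence": 2, "size": 3, "position": 3}
weights = {name: rng.standard_normal((128, n)) for name, n in head_sizes.items()}

# One linear projection + softmax per task head
outputs = {name: softmax(features @ W) for name, W in weights.items()}
for name, probs in outputs.items():
    print(name, probs.shape)  # each row is a probability distribution
```

In STL mode only the presence head would remain (with a sigmoid output, per the description above).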

2. Optimization Strategies

In the LSD framework, a novel Adaptive Weighted Decay–RMSProp (AWDR) optimizer is used:

  • At epoch t of T, the blending coefficient β(t) = β₀(1 − t/T) interpolates between the RMSProp and AdamW updates: Δθ_t = β(t) u_t^RMS + (1 − β(t)) u_t^AdamW.
  • RMSProp and AdamW updates proceed with standard moving averages and decoupled weight decay.
  • The strategy confers RMSProp's smoothing in early stages and AdamW's regularization late, stabilizing the training of both YOLOv11 and EfficientNet modules.
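A minimal numerical sketch of one AWDR step, under the blending rule above; the hyperparameter values are placeholder defaults rather than the paper's reported settings:

```python
import numpy as np

def awdr_step(theta, grad, state, t, T, lr=1e-2, beta0=0.9,
              rho=0.99, b1=0.9, b2=0.999, wd=1e-4, eps=1e-8):
    """One blended update: beta(t) = beta0 * (1 - t/T) weights the
    RMSProp direction early and the AdamW direction late."""
    beta = beta0 * (1 - t / T)
    # RMSProp branch: second-moment moving average only
    state["v_rms"] = rho * state["v_rms"] + (1 - rho) * grad**2
    u_rms = grad / (np.sqrt(state["v_rms"]) + eps)
    # AdamW branch: bias-corrected moments + decoupled weight decay
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad**2
    m_hat = state["m"] / (1 - b1 ** (t + 1))
    v_hat = state["v"] / (1 - b2 ** (t + 1))
    u_adamw = m_hat / (np.sqrt(v_hat) + eps) + wd * theta
    return theta - lr * (beta * u_rms + (1 - beta) * u_adamw)

# Minimize f(theta) = theta^2 for a few steps as a sanity check
theta = np.array(5.0)
state = {"v_rms": 0.0, "m": 0.0, "v": 0.0}
T = 50
for t in range(T):
    theta = awdr_step(theta, 2 * theta, state, t, T)
print(abs(theta) < 5.0)  # True: |theta| shrinks toward the minimum
```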

The tactile pipeline employs the Adam optimizer with early stopping based on validation loss. In transfer learning, weights pre-trained on naive user data are fine-tuned using a reduced learning rate on specialist-collected data (Syrymova et al., 15 Feb 2025).

3. Training Protocols and Datasets

Lumpy Skin Disease (Image) Pipeline

  • Dataset: 1,024 images from the Kaggle “Lumpy Skin Images” dataset (324 affected, 700 healthy).
  • Split: Training: 724 (500 healthy, 224 affected); Testing: 300 (200 healthy, 100 affected).
  • Batch Size & Epochs: 16; 20 epochs with OneCycleLR scheduling and an initial learning rate of 1×10⁻⁴.
  • Augmentation: Segmentation-based background masking; no nonstandard normalization applied.

Tactile Breast Lump Pipeline

  • Dataset: Silicone breast phantoms (9 with lumps, 4 without); 10 naive users plus an oncologist.
  • Collection: Each naive user performs 576 trials (circular fingertip palpation); the oncologist completes 288.
  • Split: User-Level (UL) and Within-User (WU) evaluation protocols.
  • Batch Size & Epochs: 32; up to 100 epochs with early stopping (patience 10).
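The patience-based early-stopping rule used in the tactile pipeline can be sketched as follows (the function name and the example loss curve are illustrative):

```python
def early_stop_epoch(val_losses, patience=10):
    """Return the epoch index at which training halts: the first epoch at
    which validation loss has not improved for `patience` epochs."""
    best, best_epoch = float("inf"), -1
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return len(val_losses) - 1

# Loss improves until epoch 3, then plateaus: halt at epoch 3 + 10 = 13
losses = [1.0, 0.8, 0.6, 0.5] + [0.5] * 30
print(early_stop_epoch(losses))  # 13
```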

4. Losses and Objective Functions

  • YOLOv11 (LSD): detection loss L_YOLO = λ_box L_CIoU + λ_dfl L_DFL + λ_cls L_BCE.
  • EfficientNet (LSD): standard binary cross-entropy (BCE).
  • InceptionTime (Tactile): binary/multiclass cross-entropy; per-head losses L_p (presence), L_s (size), and L_d (position), summed in MTL.

L_CIoU and L_DFL follow the definitions used in YOLOv11, penalizing bounding-box misalignment and distributional prediction errors, respectively. In multi-task tactile learning, the total loss is summed over the presence, size, and position heads.
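The multi-task summation can be made concrete with a small numeric example; the probabilities and targets below are fabricated purely for illustration:

```python
import numpy as np

def cross_entropy(probs, target_idx, eps=1e-12):
    """Cross-entropy of a single prediction against an integer target."""
    return -np.log(probs[target_idx] + eps)

head_probs = {
    "presence": np.array([0.1, 0.9]),        # 2-way head
    "size":     np.array([0.7, 0.2, 0.1]),   # 3-way head
    "position": np.array([0.2, 0.5, 0.3]),   # 3-way head
}
targets = {"presence": 1, "size": 0, "position": 1}

# Total MTL loss is the plain (unweighted) sum over the three heads
total = sum(cross_entropy(head_probs[h], targets[h]) for h in head_probs)
print(round(total, 4))  # 1.1552
```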

5. Performance Evaluation

Over 20 training epochs, the LSD pipeline of LUMPNet achieves:

  • Training Accuracy: 99%
  • Validation Accuracy: 98%
  • Per-Class Test Metrics:
    • Healthy: Precision = 1.00, Recall = 1.00, F1 = 1.00
    • Affected: Precision = 0.99, Recall = 0.98, F1 = 0.99
  • Confusion Matrix: [[200, 0], [2, 98]]
  • Macro F1: 0.995
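The per-class metrics can be recomputed from the confusion matrix; the row/column orientation (rows: true class, columns: predicted class) is an assumption, and small rounding differences from the reported figures are possible:

```python
cm = [[200, 0], [2, 98]]  # classes: 0 = healthy, 1 = affected

def per_class_metrics(cm, cls):
    """Precision, recall, F1 for one class of a square confusion matrix."""
    tp = cm[cls][cls]
    fp = sum(cm[r][cls] for r in range(len(cm))) - tp
    fn = sum(cm[cls][c] for c in range(len(cm))) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

for cls, name in [(0, "healthy"), (1, "affected")]:
    p, r, f1 = per_class_metrics(cm, cls)
    print(f"{name}: P={p:.3f} R={r:.3f} F1={f1:.3f}")
```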

Against other ML/DL baselines:

  • Random Forest: AUC 0.995, accuracy 0.977, F1 0.977
  • AdaBoost: AUC 0.972, accuracy 0.972, F1 0.972
  • LUMPNet: AUC 0.9968, accuracy 0.9968, F1 0.990

An EfficientNet-B0+AdamW baseline achieves only 83.77% validation accuracy and a macro F1 of 78.72% (test accuracy 78.57%), with a higher false-negative rate and slower inference (837.7 ms/image).

In multi-task user-level (UL) evaluation, the tactile pipeline reaches the following accuracies:

  • Lump presence: 82.22%
  • Size: 67.08%
  • Position: 62.63%

Transfer learning to the oncologist user (fine-tuned on 15 trials) yields:

  • Presence: 95.01%
  • Size: 88.54%
  • Position: 82.98%

InceptionTime outperforms other DL baselines on binary detection (XceptionTime: 94.3%; ResNet: 92.0%; LSTM/BiLSTM: ~83.7%).

Compared to manual palpation, LUMPNet improves presence detection accuracy by 10–15% and size/location classification by 20–30% on phantoms.

6. Comparative Significance and Applications

LUMPNet's YOLOv11+EfficientNet+AWDR ensemble for LSD detection demonstrates the efficacy of multi-stage pipelines with compound scaling and dynamic optimizer blending to handle small object localization and subtle class discrimination in medical imagery, resulting in an order-of-magnitude reduction in false negatives compared to single-pass classifiers such as EfficientNet-B0+AdamW (Ubaidullah et al., 5 Jan 2026).

The tactile LUMPNet system reveals the feasibility of pairing high-density, glove-based pressure sensing with 1D deep convolutional architectures and transfer learning to generalize from naive to specialist users and approach the performance required for clinical applications in breast self- and clinical examination—while highlighting challenges of realistic tissue modeling and generalizability (Syrymova et al., 15 Feb 2025).

A plausible implication is that the integration of dedicated localization (object detection or tactile focus) and modular network architectures, optimized via tailored learning-rate and optimization schemes, is critical for high-specificity, high-sensitivity medical screening systems in low-data or sensor-constrained domains.

7. Limitations and Future Directions

Both LUMPNet systems, despite high performance on curated, task-specific datasets, are subject to limitations inherent in data representativeness and the gap to real-world deployments:

  • The LSD pipeline requires annotated images with strong nodule visibility and accurate segmentation masks; robustness to nonstandard imaging and unseen lesion presentations remains untested (Ubaidullah et al., 5 Jan 2026).
  • The tactile pipeline's validation is restricted to silicone breast phantoms, which only coarsely approximate the viscoelastic and structural heterogeneity of live tissue; in vivo performance, motion artifact resistance, and broader demographic applicability have yet to be established (Syrymova et al., 15 Feb 2025).

Priority future directions include:

  • Extension to more diverse, real-world datasets and clinical scenarios.
  • Hybrid architectures incorporating regression-based depth estimation and self-supervised pretraining for limited label regimes.
  • Embedded or edge deployment for on-site real-time decision support in both veterinary and human healthcare.
  • Augmentation of tactile sensing with haptic feedback and AR-guided self-exam procedures.

Both frameworks underscore the advancing trend of task-specialized deep learning systems that exploit cross-modal (vision, tactile) signals and sophisticated optimization to approach expert-level performance in early disease screening.
