Guided Conditional Diffusion Classifier (ConDiff) for Enhanced Prediction of Infection in Diabetic Foot Ulcers (2405.00858v1)

Published 1 May 2024 in cs.CV

Abstract: Objective: To detect infected wounds in Diabetic Foot Ulcers (DFUs) from photographs, in order to prevent severe complications and amputations. Methods: This paper proposes the Guided Conditional Diffusion Classifier (ConDiff), a novel deep-learning infection detection model that combines guided image synthesis with a denoising diffusion model and distance-based classification. The process involves (1) generating guided conditional synthetic images by injecting Gaussian noise into a guide image and then denoising the noise-perturbed image through a reverse diffusion process conditioned on infection status, and (2) classifying infection based on the minimum Euclidean distance between the synthesized images and the original guide image in an embedding space. Results: ConDiff demonstrated superior performance, with an accuracy of 83% and an F1-score of 0.858, outperforming state-of-the-art models by at least 3%. The use of a triplet loss function reduces overfitting in the distance-based classifier. Conclusions: ConDiff not only improves diagnostic accuracy for DFU infections but also pioneers the use of generative discriminative models for detailed medical image analysis, offering a promising approach for improving patient outcomes.
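
The classification procedure described in the abstract can be illustrated with a minimal PyTorch sketch. It assumes a hypothetical class-conditional denoiser `cond_denoiser(x_t, t, label)` (the reverse diffusion process conditioned on infection status) and a hypothetical feature encoder `embed(x)` (e.g., trained with a triplet loss); these names and interfaces are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of ConDiff-style distance-based classification.
# `cond_denoiser` and `embed` are assumed, hypothetical components:
# a class-conditional reverse-diffusion sampler and an image encoder.
import torch

def noise_image(x0, t, alphas_cumprod):
    """Forward diffusion: perturb the guide image x0 with Gaussian noise at step t."""
    a_bar = alphas_cumprod[t]
    eps = torch.randn_like(x0)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps

@torch.no_grad()
def condiff_predict(x_guide, cond_denoiser, embed, alphas_cumprod, t_start, labels=(0, 1)):
    """Classify a DFU photo by conditional generation plus nearest embedding.

    1) Noise the guide image to an intermediate step t_start.
    2) Denoise it once per candidate infection label via the conditional reverse process.
    3) Return the label whose synthesized image lies closest to the guide image
       in the encoder's embedding space (Euclidean distance).
    """
    x_t = noise_image(x_guide, t_start, alphas_cumprod)
    z_guide = embed(x_guide)

    distances = {}
    for y in labels:
        x_synth = cond_denoiser(x_t, t_start, y)  # reverse diffusion conditioned on label y
        distances[y] = torch.linalg.vector_norm(embed(x_synth) - z_guide).item()

    # Predicted label = class whose conditional reconstruction is nearest to the guide image.
    return min(distances, key=distances.get), distances
```

For the triplet-loss aspect mentioned in the abstract, one plausible setup would train `embed` with `torch.nn.TripletMarginLoss`, pulling embeddings of same-class wound images together and pushing different-class embeddings apart before the distances above are computed.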

