A Unified Multi-Phase CT Synthesis and Classification Framework for Kidney Cancer Diagnosis with Incomplete Data (2312.05548v1)

Published 9 Dec 2023 in eess.IV, cs.CV, and cs.LG

Abstract: Multi-phase CT is widely adopted for the diagnosis of kidney cancer due to the complementary information among phases. However, the complete set of multi-phase CT is often unavailable in practical clinical applications. In recent years, several studies have attempted to generate the missing-modality image from the available data. Nevertheless, the generated images are not guaranteed to be effective for the diagnosis task. In this paper, we propose a unified framework for kidney cancer diagnosis with incomplete multi-phase CT, which simultaneously recovers missing CT images and classifies cancer subtypes using the completed set of images. The advantage of our framework is that it encourages the synthesis model to explicitly learn to generate missing CT phases that are helpful for classifying cancer subtypes. We further incorporate a lesion segmentation network into our framework to exploit lesion-level features for effective cancer classification on whole CT volumes. The proposed framework is built on fully 3D convolutional neural networks to jointly optimize the synthesis and classification of 3D CT volumes. Extensive experiments on both in-house and external datasets demonstrate the effectiveness of our framework for diagnosis with incomplete data compared with state-of-the-art baselines. In particular, cancer subtype classification using the CT data completed by our method achieves higher performance than classification using the given incomplete data.
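To make the joint-optimization idea concrete, the sketch below wires a toy 3D synthesis network and a toy 3D classifier to a single combined loss, so the synthesizer also receives gradient from the classification objective. This is a minimal illustration of the coupling described in the abstract, not the authors' architecture: the module names, layer sizes, phase counts, and loss weight are all assumptions.

```python
# Minimal sketch of joint phase synthesis + subtype classification (PyTorch).
# All shapes, names, and the 0.1 loss weight are illustrative assumptions.
import torch
import torch.nn as nn

class Synth3D(nn.Module):
    """Toy 3D network standing in for the missing-phase synthesis model."""
    def __init__(self, in_phases=3, out_phases=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_phases, 16, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(16, out_phases, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Cls3D(nn.Module):
    """Toy 3D classifier over the completed multi-phase volume."""
    def __init__(self, phases=4, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(phases, 16, 3, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

synth, cls = Synth3D(), Cls3D()
opt = torch.optim.Adam(list(synth.parameters()) + list(cls.parameters()), lr=1e-4)

# One training step on dummy data: three available phases, one held-out
# target phase, and a subtype label.
avail = torch.randn(2, 3, 32, 64, 64)   # (batch, phases, D, H, W)
target = torch.randn(2, 1, 32, 64, 64)
label = torch.randint(0, 5, (2,))

fake = synth(avail)                       # recover the missing phase
completed = torch.cat([avail, fake], 1)   # completed 4-phase volume
loss_syn = nn.functional.l1_loss(fake, target)
loss_cls = nn.functional.cross_entropy(cls(completed), label)
loss = loss_syn + 0.1 * loss_cls          # classification gradient reaches synth
opt.zero_grad()
loss.backward()
opt.step()
```

The design point the abstract emphasizes is exactly this coupling: a synthesis loss alone does not force the generated phase to be diagnostically useful, whereas backpropagating the classification term through the completed volume does.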
