Benchmarking PathCLIP for Pathology Image Analysis (2401.02651v3)
Abstract: Accurate image classification and retrieval are important for clinical diagnosis and treatment decision-making. The recent contrastive language-image pretraining (CLIP) model has shown remarkable proficiency in understanding natural images. Drawing inspiration from CLIP, PathCLIP is specifically designed for pathology image analysis and is trained on over 200,000 image-text pairs. While the performance of PathCLIP is impressive, its robustness under a wide range of image corruptions remains unknown. Therefore, we conduct an extensive evaluation of PathCLIP on variously corrupted images from the Osteosarcoma and WSSS4LUAD datasets. In our experiments, we introduce seven corruption types, including brightness, contrast, Gaussian blur, resolution, saturation, hue, and markup, at four severity levels. Through these experiments, we find that PathCLIP is relatively robust to image corruptions and surpasses OpenAI-CLIP and PLIP in zero-shot classification. Among the seven corruptions, blur and resolution cause the most severe performance degradation in PathCLIP, indicating that ensuring image quality is crucial before conducting clinical tests. Additionally, we assess the robustness of PathCLIP on the task of image-to-image retrieval, revealing that under diverse corruptions PathCLIP performs less effectively than PLIP on Osteosarcoma but better on WSSS4LUAD. Overall, PathCLIP delivers impressive zero-shot classification and retrieval performance on pathology images, but appropriate care is needed when using it. We hope this study provides a qualitative impression of PathCLIP and helps in understanding its differences from other CLIP models.
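As a concrete illustration, the following is a minimal sketch (not the authors' released code) of how the corruption-plus-zero-shot-classification protocol described above could be reproduced with an open CLIP-style interface. The checkpoint name, corruption strengths, and class prompts are illustrative assumptions; PathCLIP, OpenAI-CLIP, and PLIP would each be substituted in via their respective weights.

```python
# Minimal sketch of the corruption + zero-shot classification protocol.
# Model names, corruption strengths, and prompts are illustrative assumptions,
# not the authors' released implementation.
import torch
from PIL import Image, ImageEnhance, ImageFilter
import open_clip

# Hypothetical backbone; PathCLIP / PLIP weights would be loaded analogously.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-16", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-16")
model.eval()

def corrupt(img: Image.Image, kind: str, severity: int) -> Image.Image:
    """Apply one corruption type at an illustrative severity level (1-4)."""
    if kind == "brightness":
        return ImageEnhance.Brightness(img).enhance(1.0 + 0.25 * severity)
    if kind == "contrast":
        return ImageEnhance.Contrast(img).enhance(1.0 - 0.15 * severity)
    if kind == "blur":
        return img.filter(ImageFilter.GaussianBlur(radius=severity))
    if kind == "saturation":
        return ImageEnhance.Color(img).enhance(1.0 - 0.2 * severity)
    if kind == "resolution":
        w, h = img.size
        low = img.resize((w // (severity + 1), h // (severity + 1)))
        return low.resize((w, h))
    return img  # hue shifts and markup overlays omitted for brevity

@torch.no_grad()
def zero_shot_predict(img: Image.Image, class_prompts: list[str]) -> int:
    """Return the index of the text prompt closest to the image embedding."""
    image = preprocess(img).unsqueeze(0)
    text = tokenizer(class_prompts)
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(text)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    return int((img_feat @ txt_feat.T).argmax(dim=-1).item())

# Example: classify an osteosarcoma patch after blurring it at severity 3.
prompts = ["an H&E image of viable tumor",
           "an H&E image of necrotic tumor",
           "an H&E image of non-tumor tissue"]
patch = Image.open("patch.png").convert("RGB")  # path is a placeholder
label = zero_shot_predict(corrupt(patch, "blur", severity=3), prompts)
```

Image-to-image retrieval can be evaluated analogously by ranking candidate patches by the cosine similarity between their image embeddings and the embedding of a corrupted query patch.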
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T.J., Zou, J.: A visual–language foundation model for pathology image analysis using medical twitter. Nature medicine 29(9), 2307–2316 (2023) Sun et al. [2023] Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. 
OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. 
[2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. 
arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). 
Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. 
[2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. 
IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 
242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. 
[2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. 
In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. 
[2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. 
[2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. 
[2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. 
[2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. 
[2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). 
Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. 
[2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Fremond, S., Andani, S., Wolf, J.B., Dijkstra, J., Melsbach, S., Jobsen, J.J., Brinkhuis, M., Roothaan, S., Jurgenliemk-Schulz, I., Lutgens, L.C., et al.: Interpretable deep learning model to predict the molecular classification of endometrial cancer from haematoxylin and eosin-stained whole-slide images: a combined analysis of the portec randomised trials and clinical cohorts. The Lancet Digital Health 5(2), 71–82 (2023) Wang et al. [2022] Wang, C.-W., Huang, S.-C., Lee, Y.-C., Shen, Y.-J., Meng, S.-I., Gaol, J.L.: Deep learning for bone marrow cell detection and classification on whole-slide images. Medical Image Analysis 75, 102270 (2022) Shui et al. [2023] Shui, Z., Zheng, S., Yu, X., Zhang, S., Li, H., Li, J., Yang, L.: Deformable proposal-aware p2pnet: A universal network for cell recognition under point supervision. arXiv preprint arXiv:2303.02602 (2023) Saltz et al. [2018] Saltz, J., Gupta, R., Hou, L., Kurc, T., Singh, P., Nguyen, V., Samaras, D., Shroyer, K.R., Zhao, T., Batiste, R., et al.: Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology images. Cell reports 23(1), 181–193 (2018) Li et al. [2020] Li, Z., Zhang, J., Tan, T., Teng, X., Sun, X., Zhao, H., Liu, L., Xiao, Y., Lee, B., Li, Y., et al.: Deep learning methods for lung cancer segmentation in whole-slide histopathology images—the acdc@ lunghp challenge 2019. IEEE Journal of Biomedical and Health Informatics 25(2), 429–440 (2020) Wang et al. [2023] Wang, X., Du, Y., Yang, S., Zhang, J., Wang, M., Zhang, J., Yang, W., Huang, J., Han, X.: Retccl: clustering-guided contrastive learning for whole-slide image retrieval. Medical image analysis 83, 102645 (2023) Huang et al. [2023] Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T.J., Zou, J.: A visual–language foundation model for pathology image analysis using medical twitter. Nature medicine 29(9), 2307–2316 (2023) Sun et al. [2023] Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. 
[2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. 
[2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Wang, C.-W., Huang, S.-C., Lee, Y.-C., Shen, Y.-J., Meng, S.-I., Gaol, J.L.: Deep learning for bone marrow cell detection and classification on whole-slide images. Medical Image Analysis 75, 102270 (2022) Shui et al. [2023] Shui, Z., Zheng, S., Yu, X., Zhang, S., Li, H., Li, J., Yang, L.: Deformable proposal-aware p2pnet: A universal network for cell recognition under point supervision. arXiv preprint arXiv:2303.02602 (2023) Saltz et al. [2018] Saltz, J., Gupta, R., Hou, L., Kurc, T., Singh, P., Nguyen, V., Samaras, D., Shroyer, K.R., Zhao, T., Batiste, R., et al.: Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology images. Cell reports 23(1), 181–193 (2018) Li et al. 
[2020] Li, Z., Zhang, J., Tan, T., Teng, X., Sun, X., Zhao, H., Liu, L., Xiao, Y., Lee, B., Li, Y., et al.: Deep learning methods for lung cancer segmentation in whole-slide histopathology images—the acdc@ lunghp challenge 2019. IEEE Journal of Biomedical and Health Informatics 25(2), 429–440 (2020) Wang et al. [2023] Wang, X., Du, Y., Yang, S., Zhang, J., Wang, M., Zhang, J., Yang, W., Huang, J., Han, X.: Retccl: clustering-guided contrastive learning for whole-slide image retrieval. Medical image analysis 83, 102645 (2023) Huang et al. [2023] Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T.J., Zou, J.: A visual–language foundation model for pathology image analysis using medical twitter. Nature medicine 29(9), 2307–2316 (2023) Sun et al. [2023] Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. 
[2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 
12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Shui, Z., Zheng, S., Yu, X., Zhang, S., Li, H., Li, J., Yang, L.: Deformable proposal-aware p2pnet: A universal network for cell recognition under point supervision. arXiv preprint arXiv:2303.02602 (2023) Saltz et al. [2018] Saltz, J., Gupta, R., Hou, L., Kurc, T., Singh, P., Nguyen, V., Samaras, D., Shroyer, K.R., Zhao, T., Batiste, R., et al.: Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology images. Cell reports 23(1), 181–193 (2018) Li et al. [2020] Li, Z., Zhang, J., Tan, T., Teng, X., Sun, X., Zhao, H., Liu, L., Xiao, Y., Lee, B., Li, Y., et al.: Deep learning methods for lung cancer segmentation in whole-slide histopathology images—the acdc@ lunghp challenge 2019. IEEE Journal of Biomedical and Health Informatics 25(2), 429–440 (2020) Wang et al. [2023] Wang, X., Du, Y., Yang, S., Zhang, J., Wang, M., Zhang, J., Yang, W., Huang, J., Han, X.: Retccl: clustering-guided contrastive learning for whole-slide image retrieval. Medical image analysis 83, 102645 (2023) Huang et al. [2023] Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T.J., Zou, J.: A visual–language foundation model for pathology image analysis using medical twitter. Nature medicine 29(9), 2307–2316 (2023) Sun et al. [2023] Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. 
In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Saltz, J., Gupta, R., Hou, L., Kurc, T., Singh, P., Nguyen, V., Samaras, D., Shroyer, K.R., Zhao, T., Batiste, R., et al.: Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology images. Cell reports 23(1), 181–193 (2018) Li et al. [2020] Li, Z., Zhang, J., Tan, T., Teng, X., Sun, X., Zhao, H., Liu, L., Xiao, Y., Lee, B., Li, Y., et al.: Deep learning methods for lung cancer segmentation in whole-slide histopathology images—the acdc@ lunghp challenge 2019. IEEE Journal of Biomedical and Health Informatics 25(2), 429–440 (2020) Wang et al. [2023] Wang, X., Du, Y., Yang, S., Zhang, J., Wang, M., Zhang, J., Yang, W., Huang, J., Han, X.: Retccl: clustering-guided contrastive learning for whole-slide image retrieval. Medical image analysis 83, 102645 (2023) Huang et al. [2023] Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T.J., Zou, J.: A visual–language foundation model for pathology image analysis using medical twitter. Nature medicine 29(9), 2307–2316 (2023) Sun et al. [2023] Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. 
arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. 
[2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. 
arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. 
[2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. 
[2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. 
[2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. 
[2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. 
[2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 
8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. 
[2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). 
PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
- Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023)
- Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018)
- Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021)
- Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
- Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023)
- Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020)
- Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022)
- Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI Blog 1(8), 9 (2019)
- Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v(ision). arXiv preprint arXiv:2309.17421 9(1) (2023)
- Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023)
- Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
- Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022)
- Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021)
- Galindo, Y., Faria, F.A.: Understanding clip robustness
- Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer
- Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023)
- Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and Oncology 180, 109483 (2023)
- Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer
- Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer
- Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023)
- Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer
- Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from UT Southwestern/UT Dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Archive 14 (2019)
- Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022)
- Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023)
- Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023)
- Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical Image Analysis 58, 101544 (2019)
- Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Wang, X., Du, Y., Yang, S., Zhang, J., Wang, M., Zhang, J., Yang, W., Huang, J., Han, X.: Retccl: clustering-guided contrastive learning for whole-slide image retrieval. Medical image analysis 83, 102645 (2023) Huang et al. [2023] Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T.J., Zou, J.: A visual–language foundation model for pathology image analysis using medical twitter. Nature medicine 29(9), 2307–2316 (2023) Sun et al. [2023] Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. 
In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T.J., Zou, J.: A visual–language foundation model for pathology image analysis using medical twitter. Nature medicine 29(9), 2307–2316 (2023) Sun et al. [2023] Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. 
[2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. 
European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. 
[2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. 
European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. 
arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. 
[2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. 
[2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. 
[2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. 
[2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
[2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. 
[2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. 
Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. 
[2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Shui, Z., Zheng, S., Yu, X., Zhang, S., Li, H., Li, J., Yang, L.: Deformable proposal-aware p2pnet: A universal network for cell recognition under point supervision. arXiv preprint arXiv:2303.02602 (2023) Saltz et al. [2018] Saltz, J., Gupta, R., Hou, L., Kurc, T., Singh, P., Nguyen, V., Samaras, D., Shroyer, K.R., Zhao, T., Batiste, R., et al.: Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology images. Cell reports 23(1), 181–193 (2018) Li et al. [2020] Li, Z., Zhang, J., Tan, T., Teng, X., Sun, X., Zhao, H., Liu, L., Xiao, Y., Lee, B., Li, Y., et al.: Deep learning methods for lung cancer segmentation in whole-slide histopathology images—the acdc@ lunghp challenge 2019. IEEE Journal of Biomedical and Health Informatics 25(2), 429–440 (2020) Wang et al. [2023] Wang, X., Du, Y., Yang, S., Zhang, J., Wang, M., Zhang, J., Yang, W., Huang, J., Han, X.: Retccl: clustering-guided contrastive learning for whole-slide image retrieval. Medical image analysis 83, 102645 (2023) Huang et al. [2023] Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T.J., Zou, J.: A visual–language foundation model for pathology image analysis using medical twitter. Nature medicine 29(9), 2307–2316 (2023) Sun et al. [2023] Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. 
[2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Saltz, J., Gupta, R., Hou, L., Kurc, T., Singh, P., Nguyen, V., Samaras, D., Shroyer, K.R., Zhao, T., Batiste, R., et al.: Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology images. Cell reports 23(1), 181–193 (2018) Li et al. [2020] Li, Z., Zhang, J., Tan, T., Teng, X., Sun, X., Zhao, H., Liu, L., Xiao, Y., Lee, B., Li, Y., et al.: Deep learning methods for lung cancer segmentation in whole-slide histopathology images—the acdc@ lunghp challenge 2019. IEEE Journal of Biomedical and Health Informatics 25(2), 429–440 (2020) Wang et al. [2023] Wang, X., Du, Y., Yang, S., Zhang, J., Wang, M., Zhang, J., Yang, W., Huang, J., Han, X.: Retccl: clustering-guided contrastive learning for whole-slide image retrieval. Medical image analysis 83, 102645 (2023) Huang et al. [2023] Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T.J., Zou, J.: A visual–language foundation model for pathology image analysis using medical twitter. Nature medicine 29(9), 2307–2316 (2023) Sun et al. [2023] Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. 
[2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. 
Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Li, Z., Zhang, J., Tan, T., Teng, X., Sun, X., Zhao, H., Liu, L., Xiao, Y., Lee, B., Li, Y., et al.: Deep learning methods for lung cancer segmentation in whole-slide histopathology images—the acdc@ lunghp challenge 2019. IEEE Journal of Biomedical and Health Informatics 25(2), 429–440 (2020) Wang et al. [2023] Wang, X., Du, Y., Yang, S., Zhang, J., Wang, M., Zhang, J., Yang, W., Huang, J., Han, X.: Retccl: clustering-guided contrastive learning for whole-slide image retrieval. Medical image analysis 83, 102645 (2023) Huang et al. [2023] Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T.J., Zou, J.: A visual–language foundation model for pathology image analysis using medical twitter. Nature medicine 29(9), 2307–2316 (2023) Sun et al. [2023] Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. 
[2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). 
arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. 
[2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. 
[2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. 
[2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 
12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. 
[2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. 
[2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. 
[2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. 
[2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. 
arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. 
[2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. 
[2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. 
[2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Saltz, J., Gupta, R., Hou, L., Kurc, T., Singh, P., Nguyen, V., Samaras, D., Shroyer, K.R., Zhao, T., Batiste, R., et al.: Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology images. Cell reports 23(1), 181–193 (2018) Li et al. [2020] Li, Z., Zhang, J., Tan, T., Teng, X., Sun, X., Zhao, H., Liu, L., Xiao, Y., Lee, B., Li, Y., et al.: Deep learning methods for lung cancer segmentation in whole-slide histopathology images—the acdc@ lunghp challenge 2019. IEEE Journal of Biomedical and Health Informatics 25(2), 429–440 (2020) Wang et al. [2023] Wang, X., Du, Y., Yang, S., Zhang, J., Wang, M., Zhang, J., Yang, W., Huang, J., Han, X.: Retccl: clustering-guided contrastive learning for whole-slide image retrieval. Medical image analysis 83, 102645 (2023) Huang et al. [2023] Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T.J., Zou, J.: A visual–language foundation model for pathology image analysis using medical twitter. Nature medicine 29(9), 2307–2316 (2023) Sun et al. [2023] Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. 
[2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. 
[2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Li, Z., Zhang, J., Tan, T., Teng, X., Sun, X., Zhao, H., Liu, L., Xiao, Y., Lee, B., Li, Y., et al.: Deep learning methods for lung cancer segmentation in whole-slide histopathology images—the acdc@ lunghp challenge 2019. IEEE Journal of Biomedical and Health Informatics 25(2), 429–440 (2020) Wang et al. [2023] Wang, X., Du, Y., Yang, S., Zhang, J., Wang, M., Zhang, J., Yang, W., Huang, J., Han, X.: Retccl: clustering-guided contrastive learning for whole-slide image retrieval. Medical image analysis 83, 102645 (2023) Huang et al. [2023] Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T.J., Zou, J.: A visual–language foundation model for pathology image analysis using medical twitter. Nature medicine 29(9), 2307–2316 (2023) Sun et al. [2023] Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. 
[2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. 
[2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Wang, X., Du, Y., Yang, S., Zhang, J., Wang, M., Zhang, J., Yang, W., Huang, J., Han, X.: Retccl: clustering-guided contrastive learning for whole-slide image retrieval. Medical image analysis 83, 102645 (2023) Huang et al. 
[2023] Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T.J., Zou, J.: A visual–language foundation model for pathology image analysis using medical twitter. Nature medicine 29(9), 2307–2316 (2023) Sun et al. [2023] Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. 
[2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. 
[2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T.J., Zou, J.: A visual–language foundation model for pathology image analysis using medical twitter. Nature medicine 29(9), 2307–2316 (2023) Sun et al. [2023] Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. 
[2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. 
8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. 
[2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). 
PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. 
[2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. 
[2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. 
[2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. 
[2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. 
[2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. 
[2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. 
[2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. 
In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. 
arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. 
Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. 
[2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. 
[2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. 
[2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. 
[2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. 
[2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. 
[2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. 
In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. 
In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. 
[2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. 
[2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. 
[2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. 
[2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. 
arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. 
Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. 
[2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. 
[2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. 
[2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. 
- Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T.J., Zou, J.: A visual–language foundation model for pathology image analysis using medical twitter. Nature medicine 29(9), 2307–2316 (2023) Sun et al. [2023] Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. 
[2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. 
[2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. 
In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. 
[2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. 
[2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. 
In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. 
[2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. 
In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. 
[2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. 
[2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. 
[2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. 
[2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. 
[2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. 
[2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Sun, Y., Zhu, C., Zheng, S., Zhang, K., Shui, Z., Yu, X., Zhao, Y., Li, H., Zhang, Y., Zhao, R., et al.: Pathasst: Redefining pathology through generative foundation ai assistant for pathology. arXiv preprint arXiv:2305.15072 (2023) Woerl et al. [2020] Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. 
[2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Woerl, A.-C., Eckstein, M., Geiger, J., Wagner, D.C., Daher, T., Stenzel, P., Fernandez, A., Hartmann, A., Wand, M., Roth, W., et al.: Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. European urology 78(2), 256–264 (2020) Li et al. [2023] Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. 
[2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. 
[2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Li, H., Zhu, C., Zhang, Y., Sun, Y., Shui, Z., Kuang, W., Zheng, S., Yang, L.: Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7454–7463 (2023) Cui et al. [2023] Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. 
[2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. 
[2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Cui, X., Zheng, S., Zhang, W., Fan, S., Wang, J., Song, F., Liu, X., Zhu, W., Ye, Z.: Prediction of histologic types in solid lung lesions using preoperative contrast-enhanced ct. European Radiology, 1–12 (2023) Touvron et al. [2023] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). 
arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. 
[2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. 
In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. 
[2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. 
In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. 
[2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. 
In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
[2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. 
arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. 
[2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. 
In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. 
[2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. 
In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. 
[2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. 
[2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 
372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. 
[2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. 
European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. 
[2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. 
[2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. 
[2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. 
In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. 
[2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. 
[2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. 
In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. 
In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. 
In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. 
[2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. 
[2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. 
[2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). 
PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. 
[2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. 
[2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. 
[2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. 
[2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. 
[2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
[2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. 
In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. 
In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. 
[2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. 
Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. 
[2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018) Cai et al. [2021] Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v(ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Mishra et al.
[2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. 
[2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. 
In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. 
[2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. 
[2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. 
In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. 
In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. 
In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. 
[2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. 
[2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. 
[2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. 
Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. 
[2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Cai, X., Liu, S., Han, J., Yang, L., Liu, Z., Liu, T.: ChestXRayBERT: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021)
- Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
- Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment Anything. arXiv preprint arXiv:2304.02643 (2023)
- Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020)
- Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: VisualGPT: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022)
- Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019)
- Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of LMMs: Preliminary explorations with GPT-4V(ision). arXiv preprint arXiv:2309.17421 9(1) (2023)
- Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal ChatGPT for medical applications: an experimental study of GPT-4V. arXiv preprint arXiv:2310.19061 (2023)
- Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
- Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: LAION-5B: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022)
- Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating CLIP: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021)
- Galindo, Y., Faria, F.A.: Understanding CLIP robustness
- Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: ChrSNet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer
- Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening MRI using deep convolutional neural networks. European Radiology, 1–9 (2023)
- Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage I-IIIA non-small cell lung cancer using deep learning. Radiotherapy and Oncology 180, 109483 (2023)
- Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep CNNs.
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. 
[2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. 
[2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. 
In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. 
[2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. 
In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Kirillov et al. [2023] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. 
[2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023) Dosovitskiy et al. [2020] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). 
arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. 
[2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. 
[2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. 
- Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019)
- Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of LMMs: Preliminary explorations with GPT-4V(ision). arXiv preprint arXiv:2309.17421 (2023)
- Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal ChatGPT for medical applications: an experimental study of GPT-4V. arXiv preprint arXiv:2310.19061 (2023)
- Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
- Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: LAION-5B: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022)
- Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating CLIP: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021)
- Galindo, Y., Faria, F.A.: Understanding CLIP robustness
- Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: ChrsNet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer
- Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening MRI using deep convolutional neural networks. European Radiology, 1–9 (2023)
- Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage I-IIIA non-small cell lung cancer using deep learning. Radiotherapy and Oncology 180, 109483 (2023)
- Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer
- Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer
- Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023)
- Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer
- Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from UT Southwestern/UT Dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019)
- Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022)
- Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of SAM: Segment Anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023)
- Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-SAM: Towards evaluating adversarial robustness of Segment Anything Model. arXiv preprint arXiv:2305.00866 (2023)
- Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical Image Analysis 58, 101544 (2019)
- Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep CNNs. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment Anything. arXiv preprint arXiv:2304.02643 (2023)
- Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020)
- Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: VisualGPT: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022)
[2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 
8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. 
[2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). 
PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. 
[2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. 
[2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. 
[2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. 
[2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. 
[2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020) Chen et al. [2022] Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022) Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. 
[2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. 
In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. 
arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. 
Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. 
arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). 
Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. 
Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. 
[2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. 
[2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. 
[2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer
- Chen, J., Guo, H., Yi, K., Li, B., Elhoseiny, M.: VisualGPT: Data-efficient adaptation of pretrained language models for image captioning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18030–18040 (2022)
- Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI Blog 1(8), 9 (2019)
- Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of LMMs: Preliminary explorations with GPT-4V(ision). arXiv preprint arXiv:2309.17421 (2023)
- Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal ChatGPT for medical applications: An experimental study of GPT-4V. arXiv preprint arXiv:2310.19061 (2023)
- Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
- Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: LAION-5B: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022)
- Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating CLIP: Towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021)
- Galindo, Y., Faria, F.A.: Understanding CLIP robustness
- Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: ChrSNet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer
- Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening MRI using deep convolutional neural networks. European Radiology, 1–9 (2023)
- Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage I–IIIA non-small cell lung cancer using deep learning. Radiotherapy and Oncology 180, 109483 (2023)
- Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, pp. 372–381 (2020). Springer
- Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023)
- Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, pp. 12–23 (2017). Springer
- Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from UT Southwestern/UT Dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Archive 14 (2019)
- Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022)
- Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of SAM: Segment Anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023)
- Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-SAM: Towards evaluating adversarial robustness of Segment Anything Model. arXiv preprint arXiv:2305.00866 (2023)
- Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical Image Analysis 58, 101544 (2019)
- Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep CNNs. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
[2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. 
[2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. 
[2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) Yang et al. [2023] Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 
12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421 9(1) (2023) Yan et al. [2023] Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. 
Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. 
Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. 
[2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. 
[2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. 
In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. 
In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. 
[2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023)
- Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer
- Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from UT Southwestern/UT Dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019)
- Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022)
- Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of SAM: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023)
- Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-SAM: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023)
- Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical Image Analysis 58, 101544 (2019)
- Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep CNNs. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer
- Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer
- Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., Wang, L.: The dawn of LMMs: Preliminary explorations with GPT-4V(ision). arXiv preprint arXiv:2309.17421 9(1) (2023)
- Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
- Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: LAION-5B: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022)
- Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating CLIP: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021)
- Galindo, Y., Faria, F.A.: Understanding CLIP robustness
- Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: ChrSNet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer
- Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening MRI using deep convolutional neural networks. European Radiology, 1–9 (2023)
- Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage I-IIIA non-small cell lung cancer using deep learning. Radiotherapy and Oncology 180, 109483 (2023)
- Yan, Z., Zhang, K., Zhou, R., He, L., Li, X., Sun, L.: Multimodal chatgpt for medical applications: an experimental study of gpt-4v. arXiv preprint arXiv:2310.19061 (2023)
- Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Schuhmann et al. [2022] Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. 
[2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022) Agarwal et al. [2021] Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. 
In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023)
- Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278–25294 (2022)
- Agarwal, S., Krueger, G., Clark, J., Radford, A., Kim, J.W., Brundage, M.: Evaluating clip: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818 (2021) [27] Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. 
[2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. 
[2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Galindo, Y., Faria, F.A.: Understanding clip robustness Zheng et al. [2022] Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer Jing et al. [2023] Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening mri using deep convolutional neural networks. European Radiology, 1–9 (2023) Zheng et al. [2023] Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage i-iiia non-small cell lung cancer using deep learning. 
Radiotherapy and oncology 180, 109483 (2023) Zhang et al. [2022] Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. 
[2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. 
Medical Image Analysis 80, 102487 (2022)
- Zheng, S., Li, J., Shui, Z., Zhu, C., Zhang, Y., Chen, P., Yang, L.: Chrsnet: Chromosome straightening using self-attention guided networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 119–128 (2022). Springer
- Jing, X., Dorrius, M.D., Zheng, S., Wielema, M., Oudkerk, M., Sijens, P.E., Ooijen, P.M.: Localization of contrast-enhanced breast lesions in ultrafast screening MRI using deep convolutional neural networks. European Radiology, 1–9 (2023)
- Zheng, S., Guo, J., Langendijk, J.A., Both, S., Veldhuis, R.N., Oudkerk, M., Ooijen, P.M., Wijsman, R., Sijtsema, N.M.: Survival prediction for stage I-IIIA non-small cell lung cancer using deep learning. Radiotherapy and Oncology 180, 109483 (2023)
- Zhang, Y., Sun, Y., Li, H., Zheng, S., Zhu, C., Yang, L.: Benchmarking the robustness of deep neural networks to common corruptions in digital pathology. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–252 (2022). Springer Zhang et al. [2020] Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. 
In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. 
[2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Zhang, S., Ni, Q., Li, B., Jiang, S., Cai, W., Chen, H., Luo, L.: Corruption-robust enhancement of deep neural networks for classification of peripheral blood smear images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, pp. 372–381 (2020). Springer Huang et al. [2023] Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. 
Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. 
[2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Huang, P., Zhang, S., Gan, Y., Xu, R., Zhu, R., Qin, W., Guo, L., Jiang, S., Luo, L.: Assessing and enhancing robustness of deep learning models with corruption emulation in digital pathology. arXiv preprint arXiv:2310.20427 (2023) Mishra et al. [2017] Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. 
[2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Histopathological diagnosis for viable and non-viable tumor prediction for osteosarcoma using convolutional neural network. In: Bioinformatics Research and Applications: 13th International Symposium, ISBRA 2017, Honolulu, HI, USA, May 29–June 2, 2017, Proceedings 13, pp. 12–23 (2017). Springer Leavey et al. [2019] Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. 
[2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Leavey, P., Sengupta, A., Rakheja, D., Daescu, O., Arunachalam, H., Mishra, R.: Osteosarcoma data from ut southwestern/ut dallas for viable and necrotic tumor assessment [data set]. Cancer Imaging Arch 14 (2019) Han et al. [2022] Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. 
IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Han, C., Lin, J., Mai, J., Wang, Y., Zhang, Q., Zhao, B., Chen, X., Pan, X., Shi, Z., Xu, Z., et al.: Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Medical Image Analysis 80, 102487 (2022) Qiao et al. [2023] Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Qiao, Y., Zhang, C., Kang, T., Kim, D., Tariq, S., Zhang, C., Hong, C.S.: Robustness of sam: Segment anything under corruptions and beyond. arXiv preprint arXiv:2306.07713 (2023) Zhang et al. [2023] Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Zhang, C., Zhang, C., Kang, T., Kim, D., Bae, S.-H., Kweon, I.S.: Attack-sam: Towards evaluating adversarial robustness of segment anything model. arXiv preprint arXiv:2305.00866 (2023) Tellez et al. [2019] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.-M., Ciompi, F., Van Der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical image analysis 58, 101544 (2019) Takahashi et al. [2019] Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019) Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)
- Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep cnns. IEEE Transactions on Circuits and Systems for Video Technology 30(9), 2917–2931 (2019)